Preface

Microsoft's current GraphRAG is more of a demo and does not cope well with large data volumes, so I wired Milvus into GraphRAG. After reading this article, plugging in other databases should pose no problem either.

Note: this article only hooks Milvus into the search stage. Integrating it into the index stage (or the whole pipeline) will be covered in a later post when I have time.

Connecting the database

In graphrag/query/cli.py, locate the run_local_search function and find:

```python
store_entity_semantic_embeddings(
    entities=entities, vectorstore=description_embedding_store
)
```

Comment it out and add:

```python
if vector_store_type == VectorStoreType.Milvus:
    # custom implementation
    store_text_semantic_embeddings(
        Texts=text_units,
        vectorstore=description_embedding_store,
        final_documents=final_documents,
    )
else:
    store_entity_semantic_embeddings(
        entities=entities, vectorstore=description_embedding_store
    )
```

Here vector_store_type selects GraphRAG's vector store. It is defined in graphrag/vector_stores/typing.py, where we need to add Milvus = "milvus" by hand:

```python
class VectorStoreType(str, Enum):
    """The supported vector store types."""

    LanceDB = "lancedb"
    AzureAISearch = "azure_ai_search"
    Milvus = "milvus"
```

At the same time, modify get_vector_store to add a case for VectorStoreType.Milvus. MilvusVectorStore is a custom class implementing the Milvus interface; it is covered later:

```python
@classmethod
def get_vector_store(
    cls, vector_store_type: VectorStoreType | str, kwargs: dict
) -> LanceDBVectorStore | AzureAISearch:
    """Get the vector store type from a string."""
    match vector_store_type:
        case VectorStoreType.LanceDB:
            return LanceDBVectorStore(**kwargs)
        case VectorStoreType.AzureAISearch:
            return AzureAISearch(**kwargs)
        case VectorStoreType.Milvus:
            return MilvusVectorStore(**kwargs)
        case _:
            if vector_store_type in cls.vector_store_types:
                return cls.vector_store_types[vector_store_type](**kwargs)
            msg = f"Unknown vector store type: {vector_store_type}"
            raise ValueError(msg)
```
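Why can the raw string pulled from the config be compared directly against VectorStoreType.Milvus? Because the enum subclasses str, its members compare equal to their underlying values. A self-contained sketch (the enum body mirrors the one above):

```python
from enum import Enum


class VectorStoreType(str, Enum):
    """Mirror of graphrag/vector_stores/typing.py with the new Milvus member."""

    LanceDB = "lancedb"
    AzureAISearch = "azure_ai_search"
    Milvus = "milvus"


# str-mixin enums compare equal to their raw config strings, so
# vector_store_args.get("type", ...) needs no explicit conversion:
print(VectorStoreType.Milvus == "milvus")                   # True
print(VectorStoreType("milvus") is VectorStoreType.Milvus)  # True
```

This is also why the `vector_store_args.get("type", VectorStoreType.LanceDB)` default later in run_local_search works whether the config supplies a string or nothing at all.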
Next, store_text_semantic_embeddings, written by analogy with store_entity_semantic_embeddings, lives in graphrag/query/input/loaders/dfs.py:

```python
def store_text_semantic_embeddings(
    Texts: list[TextUnit],
    vectorstore: BaseVectorStore,
    final_documents: DataFrame,
) -> BaseVectorStore:
    """Store text unit semantic embeddings in a vectorstore."""
    documents = []
    for Text in Texts:
        matching_rows = final_documents[
            final_documents["id"] == Text.document_ids[0]
        ]
        if not matching_rows.empty:
            # if the source document exists, store its title;
            # otherwise store the chunk ids generated by GraphRAG
            document_title = matching_rows["title"].values[0]
        else:
            document_title = Text.document_ids
        # besides the document title, keep the ids of the entities
        # extracted from this text chunk
        attributes_dict = {
            "document_title": document_title,
            "entity_ids": Text.entity_ids,
        }
        if Text.attributes:
            attributes_dict.update({**Text.attributes})
        documents.append(
            VectorStoreDocument(
                id=Text.id,
                text=Text.text,
                vector=Text.text_embedding,
                attributes=attributes_dict,
            )
        )
    # load the text-chunk data into the Milvus collection
    vectorstore.load_documents(documents=documents)
    return vectorstore
```
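The title lookup with its fallback can be reduced to a small pure function. This is a hypothetical, pandas-free sketch of the same branch; `resolve_document_title` and `titles_by_id` are illustrative names, not part of GraphRAG:

```python
def resolve_document_title(document_ids, titles_by_id):
    """Return the source document's title when known, otherwise fall back
    to the raw GraphRAG document ids, mirroring the branch above."""
    if document_ids and document_ids[0] in titles_by_id:
        return titles_by_id[document_ids[0]]
    return document_ids


titles = {"doc-1": "annual_report.pdf"}
print(resolve_document_title(["doc-1"], titles))  # annual_report.pdf
print(resolve_document_title(["doc-9"], titles))  # ['doc-9']
```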
The full code:

```python
from graphrag.query.input.loaders.dfs import (
    store_entity_semantic_embeddings,
    store_text_semantic_embeddings,
)


def run_local_search(
    data_dir: str | None,
    root_dir: str | None,
    community_level: int,
    response_type: str,
    query: str,
):
    """Run a local search with the given query."""
    data_dir, root_dir, config = _configure_paths_and_settings(data_dir, root_dir)
    data_path = Path(data_dir)
    final_documents = pd.read_parquet(data_path / "create_final_documents.parquet")
    final_text_units = pd.read_parquet(data_path / "create_final_text_units.parquet")
    final_community_reports = pd.read_parquet(
        data_path / "create_final_community_reports.parquet"
    )
    final_relationships = pd.read_parquet(
        data_path / "create_final_relationships.parquet"
    )
    final_nodes = pd.read_parquet(data_path / "create_final_nodes.parquet")
    final_entities = pd.read_parquet(data_path / "create_final_entities.parquet")
    final_covariates_path = data_path / "create_final_covariates.parquet"
    final_covariates = (
        pd.read_parquet(final_covariates_path)
        if final_covariates_path.exists()
        else None
    )

    # unchanged; defaults to {}
    vector_store_args = (
        config.embeddings.vector_store if config.embeddings.vector_store else {}
    )
    # vector store type; defaults to VectorStoreType.LanceDB
    vector_store_type = vector_store_args.get("type", VectorStoreType.LanceDB)
    # initialize the store; LanceDB by default
    description_embedding_store = __get_embedding_description_store(
        vector_store_type=vector_store_type,
        config_args=vector_store_args,
    )

    # read the entities
    entities = read_indexer_entities(final_nodes, final_entities, community_level)
    # covariates; defaults to {}
    covariates = (
        read_indexer_covariates(final_covariates)
        if final_covariates is not None
        else []
    )
    reports = read_indexer_reports(
        final_community_reports, final_nodes, community_level
    )
    text_units = read_indexer_text_units(final_text_units)
    relationships = read_indexer_relationships(final_relationships)
    covariates = {"claims": covariates}

    if vector_store_type == VectorStoreType.Milvus:
        # custom implementation: load the text chunks into Milvus
        store_text_semantic_embeddings(
            Texts=text_units,
            vectorstore=description_embedding_store,
            final_documents=final_documents,
        )
    else:
        store_entity_semantic_embeddings(
            entities=entities, vectorstore=description_embedding_store
        )

    search_engine = get_local_search_engine(
        config,
        reports=reports,
        text_units=text_units,
        entities=entities,
        relationships=relationships,
        covariates=covariates,
        description_embedding_store=description_embedding_store,
        response_type=response_type,
    )

    # method_type is an extra switch added in this integration
    result = search_engine.search(query=query, method_type=method_type)
    reporter.success(f"Local Search Response: {result.response}")
    return result
```

Then create a milvus.py file under graphrag/vector_stores and implement a MilvusVectorStore class. Imports, an (optional) xinference embedding client, and the constructor:

```python
from datetime import datetime
from typing import Any

from pymilvus import (
    AnnSearchRequest,
    Collection,
    CollectionSchema,
    DataType,
    FieldSchema,
    connections,
    utility,
)

from graphrag.model.types import TextEmbedder

from .base import (
    BaseVectorStore,
    VectorStoreDocument,
    VectorStoreSearchResult,
)

# not required; only used here to embed the query text
from xinference.client import Client

client = Client("http://0.0.0.0:9997")
list_models_run = client.list_models()
model_uid = list_models_run["bge-m3"]["id"]
embedding_client = client.get_model(model_uid)


class MilvusVectorStore(BaseVectorStore):
    def __init__(
        self,
        url: str = "0.0.0.0",
        collection_name: str = "data_store",
        recrate: bool = False,
        key_word_flag: bool = True,
    ):
        self.key_word_flag = key_word_flag
        connections.connect(host=url, port=19530)
        # check whether the collection exists
        self.has_collection = utility.has_collection(collection_name)
        print(f"has_collection: {self.has_collection}")
        if recrate and self.has_collection:
            s = input(f"Are you sure delete {collection_name}, yes or no \n")
            if s == "yes":
                self.delete_collection(collection_name)
                print(f"Deleted {collection_name} successfully")
        if not recrate and self.has_collection:
            self.collection = Collection(name=collection_name)
        else:
            schema = self.get_schema()
            self.collection = Collection(name=collection_name, schema=schema)
```
The schema and the collection helpers:

```python
    def get_schema(self):
        # primary key, auto-generated
        id = FieldSchema(
            name="id", dtype=DataType.INT64, is_primary=True, auto_id=True
        )
        graph_id = FieldSchema(
            name="graph_id", dtype=DataType.VARCHAR, max_length=128
        )
        text = FieldSchema(name="text", dtype=DataType.VARCHAR, max_length=58192)
        file_name = FieldSchema(
            name="file_name", dtype=DataType.VARCHAR, max_length=512
        )
        # dim is the number of elements per embedding vector
        text_embedding = FieldSchema(
            name="text_embedding", dtype=DataType.FLOAT_VECTOR, dim=1024
        )
        if self.key_word_flag:
            key_word = FieldSchema(
                name="key_word", dtype=DataType.VARCHAR, max_length=8192
            )
            key_word_embedding = FieldSchema(
                name="key_word_embedding", dtype=DataType.FLOAT_VECTOR, dim=1024
            )
            schema = CollectionSchema(
                fields=[
                    id,
                    graph_id,
                    text,
                    file_name,
                    text_embedding,
                    key_word,
                    key_word_embedding,
                ],
                description="text and text-embedding store",
            )
        else:
            schema = CollectionSchema(
                fields=[id, graph_id, text, file_name, text_embedding],
                description="text and text-embedding store",
            )
        return schema

    def change_collection(self, collection_name):
        schema = self.get_schema()
        self.collection = Collection(name=collection_name, schema=schema)

    def delete_collection(self, collection_name):
        utility.drop_collection(collection_name)

    def release_collection(self):
        self.collection.release()

    def list_collections(self):
        return utility.list_collections()
```
Index management, insertion, and dense search:

```python
    def create_index(self, metric_type="L2", index_name="L2"):
        index_params = {
            "index_type": "AUTOINDEX",
            "metric_type": metric_type,
            "params": {},
        }
        self.collection.create_index(
            field_name="text_embedding",
            index_params=index_params,
            index_name="text_embedding",
        )
        if self.key_word_flag:
            self.collection.create_index(
                field_name="key_word_embedding",
                index_params=index_params,
                index_name="key_word_embedding",
            )
        self.collection.load()

    def drop_index(self):
        self.collection.release()
        self.collection.drop_index()

    def insert_data(self, data_dict: dict):
        start = datetime.now()
        self.collection.insert(data_dict)
        end = datetime.now()
        print(f"Insert took {end - start}")

    def search(self, query_embedding, top_k=10, metric_type="L2"):
        search_params = {"metric_type": metric_type, "params": {"level": 2}}
        results = self.collection.search(
            [query_embedding],
            anns_field="text_embedding",
            param=search_params,
            limit=top_k,
            output_fields=["graph_id", "text", "file_name", "text_embedding"],
        )[0]
        return results
```
Hybrid search, reranking, id filtering, and the connect hook that GraphRAG calls:

```python
    def hybrid_search(
        self,
        query_dense_embedding,
        query_sparse_embedding,
        rerank,
        top_k=10,
        metric_type="L2",
    ):
        dense_search_params = {
            "index_type": "AUTOINDEX",
            "metric_type": metric_type,
            "params": {},
        }
        dense_req = AnnSearchRequest(
            [query_dense_embedding], "text_embedding", dense_search_params, limit=top_k
        )
        sparse_search_params = {
            "index_type": "AUTOINDEX",
            "metric_type": metric_type,
            "params": {},
        }
        sparse_req = AnnSearchRequest(
            [query_sparse_embedding],
            "key_word_embedding",
            sparse_search_params,
            limit=top_k,
        )
        res = self.collection.hybrid_search(
            [dense_req, sparse_req],
            rerank=rerank,
            limit=top_k,
            output_fields=["text", "file_name"],
        )[0]
        return res

    def reranker_init(self, model_name_or_path, device="cpu"):
        # BGERerankFunction comes from pymilvus.model.reranker
        self.reranker = BGERerankFunction(
            model_name=model_name_or_path,  # e.g. "BAAI/bge-reranker-v2-m3"
            device=device,  # "cpu" or e.g. "cuda:0"
        )

    def rerank(self, query, search_result, top_k, rerank_client=None):
        documents_list = [i.entity.get("text") for i in search_result]
        # an external rerank client (not Milvus-integrated) may be passed in
        if rerank_client:
            response = rerank_client.rerank(
                query=query,
                documents=documents_list,
                top_n=top_k,
            )
            rerank_results = response["results"]
            results = []
            for item in rerank_results:
                index = item["index"]
                results.append(search_result[index])
        else:
            results = self.reranker(
                query=query,
                documents=documents_list,
                top_k=top_k,
            )
        return results

    def filter_by_id(self, include_ids: list[str] | list[int]) -> Any:
        """Build a query filter to filter documents by id."""
        if len(include_ids) == 0:
            self.query_filter = None
        else:
            if isinstance(include_ids[0], str):
                id_filter = ", ".join([f'"{id}"' for id in include_ids])
                self.query_filter = f"id in ({id_filter})"
            else:
                self.query_filter = (
                    f"id in ({', '.join([str(id) for id in include_ids])})"
                )
        return self.query_filter

    def connect(
        self,
        url: str = "0.0.0.0",
        collection_name: str = "data_store",
        recrate: bool = False,
        key_word_flag: bool = False,
        **kwargs: Any,
    ) -> Any:
        self.key_word_flag = key_word_flag
        connections.connect(host=url, port=19530)
        # check whether the collection exists
        has_collection = utility.has_collection(collection_name)
        if recrate and has_collection:
            s = input(f"Are you sure delete {collection_name}, yes or no \n")
            if s == "yes":
                self.delete_collection(collection_name)
                print(f"Deleted {collection_name} successfully")
        if not recrate and has_collection:
            self.collection = Collection(name=collection_name)
        else:
            schema = self.get_schema()
            self.collection = Collection(name=collection_name, schema=schema)
        self.create_index()
```
Loading documents and the similarity-search methods that GraphRAG's local search calls:

```python
    def load_documents(
        self, documents: list[VectorStoreDocument], overwrite: bool = True
    ) -> None:
        """Load documents into vector storage."""
        documents = [
            document for document in documents if document.vector is not None
        ]
        if self.has_collection:
            s = input("Are you want to insert data, yes or no \n")
            if s == "yes":
                batch = 100
                documents_len = len(documents)
                # Milvus cannot take arbitrarily large inserts,
                # so insert in batches
                insert_len = int(documents_len / batch)
                data_list = list()
                start = datetime.now()
                print("Inserting data ***")
                for document in documents:
                    attributes = document.attributes
                    file_name = attributes.get("document_title")[0]
                    temp_dict = {
                        "graph_id": document.id,
                        "text": document.text,
                        "text_embedding": document.vector,
                        "file_name": file_name,
                    }
                    data_list.append(temp_dict)
                    if len(data_list) >= insert_len:
                        self.collection.insert(data_list)
                        data_list = []
                if data_list:  # flush any remaining rows
                    self.collection.insert(data_list)
                end = datetime.now()
                print(f"Insert took {end - start}")

    def similarity_search_by_text(
        self, text: str, text_embedder: TextEmbedder, k: int = 10, **kwargs: Any
    ) -> list[VectorStoreSearchResult]:
        """Perform a similarity search using a given input text."""
        query_embedding = embedding_client.create_embedding([text])["data"][0][
            "embedding"
        ]
        if query_embedding:
            return self.similarity_search_by_vector(query_embedding, k)
        return []

    def similarity_search_by_vector(
        self, query_embedding: list[float], k: int = 10, **kwargs: Any
    ) -> list[VectorStoreSearchResult]:
        docs = self.search(query_embedding=query_embedding, top_k=k)
        result = []
        for doc in docs:
            file_name = doc.entity.get("file_name")
            attributes = {"document_title": file_name, "entity": []}
            score = abs(float(doc.score))
            result.append(
                VectorStoreSearchResult(
                    document=VectorStoreDocument(
                        id=doc.entity.get("graph_id"),
                        text=doc.entity.get("text"),
                        vector=doc.entity.get("text_embedding"),
                        attributes=attributes,
                    ),
                    score=score,
                )
            )
        return result

    def similarity_search_by_query(
        self, query: str, text_embedder: TextEmbedder, k: int = 10, **kwargs: Any
    ) -> list[VectorStoreSearchResult]:
        pass  # placeholder; not used in this integration

    def similarity_search_by_hybrid(
        self,
        query: str,
        text_embedder: TextEmbedder,
        k: int = 10,
        oversample_scaler: int = 10,
        **kwargs: Any,
    ) -> list[VectorStoreSearchResult]:
        pass  # placeholder; not used in this integration
```
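Note that load_documents derives its flush threshold as documents_len / batch, which makes the effective batch size depend on the total document count. If you prefer a fixed batch size instead, a generic chunking helper keeps each insert bounded; this is a sketch, and `chunked` is my name, not a pymilvus API:

```python
def chunked(items, batch_size=100):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


# e.g. 250 rows go to Milvus as batches of 100, 100 and 50:
sizes = [len(batch) for batch in chunked(list(range(250)), 100)]
print(sizes)  # [100, 100, 50]
```

Each batch would then be passed to self.collection.insert(batch), so no single insert ever exceeds the chosen size.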
Modifying the search code

Open graphrag/query/structured_search/local_search/mixed_context.py, or jump there from run_local_search in graphrag/query/cli.py: follow get_local_search_engine, then the LocalSearchMixedContext in its return, and you land in that class's implementation.

In its build_context function, jump into map_query_to_entities, which is implemented in graphrag/query/context_builder/entity_extraction.py. Find:

```python
matched = get_entity_by_key(
    entities=all_entities,  # all entities
    key=embedding_vectorstore_key,
    value=result.document.id,
)
```

Change it to:

```python
entity_ids = result.document.attributes.get("entity_ids")
if entity_ids:
    for entity_id in entity_ids:
        matched = get_entity_by_key(
            entities=all_entities,  # all entities
            key=embedding_vectorstore_key,
            value=entity_id,
        )
        if matched:
            matched_entities.append(matched)
```

If you want to keep GraphRAG's original lookup as well, you can do what I did:

```python
for result in search_results:
    if method_type == "text_match":
        entity_ids = result.document.attributes.get("entity_ids")
        if entity_ids:
            for entity_id in entity_ids:
                matched = get_entity_by_key(
                    entities=all_entities,  # all entities
                    key=embedding_vectorstore_key,
                    value=entity_id,
                )
                if matched:
                    matched_entities.append(matched)
    else:
        matched = get_entity_by_key(
            entities=all_entities,  # all entities
            key=embedding_vectorstore_key,
            value=result.document.id,
        )
        if matched:
            matched_entities.append(matched)
```

and gate it with an extra parameter, or switch on vector_store_type.
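One thing to watch: when several text chunks share the same extracted entities, this lookup loop appends the same entity to matched_entities more than once. A small order-preserving dedup helper could be applied to the matches afterwards; this is a hypothetical sketch (`dedupe_entities` is not part of GraphRAG):

```python
def dedupe_entities(entities, key=lambda e: e["id"]):
    """Drop repeated matches while preserving first-seen order."""
    seen = set()
    unique = []
    for entity in entities:
        k = key(entity)
        if k not in seen:
            seen.add(k)
            unique.append(entity)
    return unique


matched = [{"id": "e1"}, {"id": "e2"}, {"id": "e1"}]
print(dedupe_entities(matched))  # [{'id': 'e1'}, {'id': 'e2'}]
```

For GraphRAG's Entity objects, the key would be `lambda e: e.id` instead of a dict lookup.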
Finally, modify the return of map_query_to_entities to also return search_results:

```python
def map_query_to_entities(
    query: str,
    text_embedding_vectorstore: BaseVectorStore,
    text_embedder: BaseTextEmbedding,
    all_entities: list[Entity],
    embedding_vectorstore_key: str = EntityVectorStoreKey.ID,
    include_entity_names: list[str] | None = None,
    exclude_entity_names: list[str] | None = None,
    k: int = 10,
    oversample_scaler: int = 2,
    method_type: str | None = None,
) -> list[Entity]:
    """Extract entities that match a given query using semantic similarity of text embeddings of query and entity descriptions."""
    if include_entity_names is None:
        include_entity_names = []
    if exclude_entity_names is None:
        exclude_entity_names = []
    matched_entities = []
    if query != "":
        # get entities with highest semantic similarity to query;
        # oversample to account for excluded entities
        # (see the lancedb file under graphrag/vector_stores for the original)
        print("Preparing embedding")
        start_time = datetime.now()
        # returns the most similar vectors
        search_results = text_embedding_vectorstore.similarity_search_by_text(
            text=query,
            text_embedder=lambda t: text_embedder.embed(t),
            k=k * oversample_scaler,
        )
        end_time = datetime.now()
        print(f"Elapsed: {end_time - start_time}")
        for result in search_results:
            if method_type == "text_match":
                entity_ids = result.document.attributes.get("entity_ids")
                if entity_ids:
                    for entity_id in entity_ids:
                        matched = get_entity_by_key(
                            entities=all_entities,  # all entities
                            key=embedding_vectorstore_key,
                            value=entity_id,
                        )
                        if matched:
                            matched_entities.append(matched)
            else:
                matched = get_entity_by_key(
                    entities=all_entities,  # all entities
                    key=embedding_vectorstore_key,
                    value=result.document.id,
                )
                if matched:
                    matched_entities.append(matched)
    else:
        all_entities.sort(key=lambda x: x.rank if x.rank else 0, reverse=True)
        matched_entities = all_entities[:k]

    # filter out excluded entities (exclude_entity_names defaults to [])
    if exclude_entity_names:
        matched_entities = [
            entity
            for entity in matched_entities
            if entity.title not in exclude_entity_names
        ]

    # add entities in the include_entity list (defaults to [])
    included_entities = []
    for entity_name in include_entity_names:
        included_entities.extend(get_entity_by_name(all_entities, entity_name))
    # search_results was not returned originally
    return included_entities + matched_entities, search_results
```

Don't forget that build_context in graphrag/query/structured_search/local_search/mixed_context.py must now unpack three return values from map_query_to_entities instead of two:
```python
selected_entities, search_results = map_query_to_entities(
    query=query,
    text_embedding_vectorstore=self.entity_text_embeddings,
    text_embedder=self.text_embedder,
    all_entities=list(self.entities.values()),
    embedding_vectorstore_key=self.embedding_vectorstore_key,
    include_entity_names=include_entity_names,
    exclude_entity_names=exclude_entity_names,
    k=top_k_mapped_entities,
    oversample_scaler=20,  # was 2
    method_type=method_type,
)
```

At the end of build_context, find the call to self._build_text_unit_context and pass in the new search_results argument:

```python
text_unit_context, text_unit_context_data, document_id_context = self._build_text_unit_context(
    selected_entities=selected_entities,
    max_tokens=text_unit_tokens,
    return_candidate_context=return_candidate_context,
    search_results=search_results,
    method_type=method_type,
)
```

Jump to that function's implementation (still in mixed_context.py) and modify or replace the following code:

```python
for index, entity in enumerate(selected_entities):
    if entity.text_unit_ids:
        for text_id in entity.text_unit_ids:
            if (
                text_id not in [unit.id for unit in selected_text_units]
                and text_id in self.text_units
            ):
                selected_unit = self.text_units[text_id]
                num_relationships = count_relationships(
                    selected_unit, entity, self.relationships
                )
                if selected_unit.attributes is None:
                    selected_unit.attributes = {}
                selected_unit.attributes["entity_order"] = index
                selected_unit.attributes["num_relationships"] = num_relationships
                selected_text_units.append(selected_unit)

# sort selected text units by ascending order of entity order
# and descending order of number of relationships
selected_text_units.sort(
    key=lambda x: (
        x.attributes["entity_order"],  # type: ignore
        -x.attributes["num_relationships"],  # type: ignore
    )
)

for unit in selected_text_units:
    del unit.attributes["entity_order"]  # type: ignore
    del unit.attributes["num_relationships"]  # type: ignore
```
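The two-level ordering used here (entity order ascending, relationship count descending) reduces to a single tuple sort key by negating the descending field. A minimal standalone illustration with hypothetical sample data:

```python
units = [
    {"id": "t1", "entity_order": 1, "num_relationships": 5},
    {"id": "t2", "entity_order": 0, "num_relationships": 2},
    {"id": "t3", "entity_order": 0, "num_relationships": 7},
]

# ascending entity_order first; within a tie, descending num_relationships
units.sort(key=lambda u: (u["entity_order"], -u["num_relationships"]))

print([u["id"] for u in units])  # ['t3', 't2', 't1']
```

t2 and t3 tie on entity_order 0, so the more-connected t3 wins the tie and t1 (entity_order 1) comes last.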
My suggestion is to keep the original branch as well; in any case, I changed it to:

```python
if method_type == "text_match":
    for index, Text in enumerate(search_results):
        text_id = Text.document.id
        if (
            text_id not in [unit.id for unit in selected_text_units]
            and text_id in self.text_units
        ):
            selected_unit = self.text_units[text_id]
            if selected_unit.attributes is None:
                selected_unit.attributes = {
                    "document_title": Text.document.attributes["document_title"]
                }
            selected_text_units.append(selected_unit)
else:
    for index, entity in enumerate(selected_entities):
        if entity.text_unit_ids:
            for text_id in entity.text_unit_ids:
                if (
                    text_id not in [unit.id for unit in selected_text_units]
                    and text_id in self.text_units
                ):
                    selected_unit = self.text_units[text_id]
                    num_relationships = count_relationships(
                        selected_unit, entity, self.relationships
                    )
                    if selected_unit.attributes is None:
                        selected_unit.attributes = {}
                    selected_unit.attributes["entity_order"] = index
                    selected_unit.attributes["num_relationships"] = num_relationships
                    selected_text_units.append(selected_unit)

    # sort selected text units by ascending order of entity order
    # and descending order of number of relationships
    selected_text_units.sort(
        key=lambda x: (
            x.attributes["entity_order"],  # type: ignore
            -x.attributes["num_relationships"],  # type: ignore
        )
    )

    for unit in selected_text_units:
        del unit.attributes["entity_order"]  # type: ignore
        del unit.attributes["num_relationships"]  # type: ignore
```

That completes swapping out the vector database. Feeding the graph data itself into Milvus involves considerably more code; I'll tackle it when I have time. If this helped, likes and bookmarks are welcome and encourage the author to update faster~