In electronics, computer science and computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as μarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor.[1] A given ISA may be implemented with different microarchitectures;[2][3] implementations may vary due to different goals of a given design or due to shifts in technology.[4]

Diagram of the Intel Core 2 microarchitecture

Computer architecture is the combination of microarchitecture and instruction set architecture.

Relation to instruction set architecture

A microarchitecture organized around a single bus

The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the instructions, execution model, processor registers, address and data formats among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA.

The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be anything from single gates and registers, to complete arithmetic logic units (ALUs) and even larger elements. These diagrams generally separate the datapath (where data is placed) and the control path (which can be said to steer the data).[5]

The person designing a system usually draws the specific microarchitecture as a kind of data flow diagram. Like a block diagram, the microarchitecture diagram shows microarchitectural elements such as the arithmetic and logic unit and the register file as a single schematic symbol. Typically, the diagram connects those elements with arrows, thick lines and thin lines to distinguish between three-state buses (which require a three-state buffer for each device that drives the bus), unidirectional buses (always driven by a single source, such as the way the address bus on simpler computers is always driven by the memory address register), and individual control lines. Very simple computers have a single data bus organization: they have a single three-state bus. The diagram of more complex computers usually shows multiple three-state buses, which help the machine do more operations simultaneously.

Each microarchitectural element is in turn represented by a schematic describing the interconnections of logic gates used to implement it. Each logic gate is in turn represented by a circuit diagram describing the connections of the transistors used to implement it in some particular logic family. Machines with different microarchitectures may have the same instruction set architecture, and thus be capable of executing the same programs. New microarchitectures and/or circuitry solutions, along with advances in semiconductor manufacturing, are what allows newer generations of processors to achieve higher performance while using the same ISA.

In principle, a single microarchitecture could execute several different ISAs with only minor changes to the microcode.

Aspects

Intel 80286 microarchitecture

The pipelined datapath is the most commonly used datapath design in microarchitecture today. This technique is used in most modern microprocessors, microcontrollers, and DSPs. The pipelined architecture allows multiple instructions to overlap in execution, much like an assembly line. The pipeline includes several different stages which are fundamental in microarchitecture designs.[5] Some of these stages include instruction fetch, instruction decode, execute, and write back. Some architectures include other stages such as memory access. The design of pipelines is one of the central microarchitectural tasks.

Execution units are also essential to microarchitecture. Execution units include arithmetic logic units (ALUs), floating-point units (FPUs), load/store units, branch units, and SIMD units. These units perform the operations or calculations of the processor. The choice of the number of execution units, their latency and throughput is a central microarchitectural design task. The size, latency, throughput and connectivity of memories within the system are also microarchitectural decisions.

System-level design decisions such as whether or not to include peripherals, such as memory controllers, can be considered part of the microarchitectural design process. This includes decisions on the performance-level and connectivity of these peripherals.

Unlike architectural design, where achieving a specific performance level is the main goal, microarchitectural design pays closer attention to other constraints. Since microarchitecture design decisions directly affect what goes into a system, attention must be paid to issues such as chip area/cost, power consumption, logic complexity, ease of connectivity, manufacturability, ease of debugging, and testability.

Microarchitectural concepts

Instruction cycles

To run programs, all single- or multi-chip CPUs:

  1. Read an instruction and decode it
  2. Find any associated data that is needed to process the instruction
  3. Process the instruction
  4. Write the results out

The instruction cycle is repeated continuously until the power is turned off.
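The four steps above can be sketched as a toy accumulator machine. This is a minimal illustration, not a real ISA: the LOAD/ADD/STORE/HALT instruction set and the dictionary-based memory are hypothetical, invented purely for this example.

```python
# A minimal sketch of the four-step instruction cycle on a toy
# accumulator machine with a hypothetical LOAD/ADD/STORE/HALT ISA.

def run(program, memory):
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]                  # 1. read the instruction and decode it
        pc += 1
        if op == "HALT":
            break
        # 2. find any associated data needed to process the instruction
        data = memory[arg] if op in ("LOAD", "ADD") else None
        if op == "LOAD":                       # 3. process the instruction
            acc = data
        elif op == "ADD":
            acc += data
        elif op == "STORE":                    # 4. write the results out
            memory[arg] = acc
    return memory

mem = {0: 5, 1: 7, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], mem)
# mem[2] now holds 5 + 7 = 12
```

Real hardware overlaps and subdivides these steps, but every design ultimately implements this same loop.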

Multicycle microarchitecture

Historically, the earliest computers were multicycle designs. The smallest, least-expensive computers often still use this technique. Multicycle architectures often use the least total number of logic elements and reasonable amounts of power. They can be designed to have deterministic timing and high reliability. In particular, they have no pipeline to stall when taking conditional branches or interrupts. However, other microarchitectures often perform more instructions per unit time, using the same logic family. When discussing "improved performance," an improvement is often relative to a multicycle design.

In a multicycle computer, the computer does the four steps in sequence, over several cycles of the clock. Some designs can perform the sequence in two clock cycles by completing successive stages on alternate clock edges, possibly with longer operations occurring outside the main cycle. For example, stage one on the rising edge of the first cycle, stage two on the falling edge of the first cycle, etc.

In the control logic, the combination of cycle counter, cycle state (high or low) and the bits of the instruction decode register determine exactly what each part of the computer should be doing. To design the control logic, one can create a table of bits describing the control signals to each part of the computer in each cycle of each instruction. Then, this logic table can be tested in a software simulation running test code. If the logic table is placed in a memory and used to actually run a real computer, it is called a microprogram. In some computer designs, the logic table is optimized into the form of combinational logic made from logic gates, usually using a computer program that optimizes logic. Early computers used ad-hoc logic design for control until Maurice Wilkes invented this tabular approach and called it microprogramming.[6]
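The tabular approach can be sketched as a lookup table of control words indexed by instruction and cycle. The signal names and the three-cycle ADD sequence below are hypothetical, chosen only to show the shape of such a table.

```python
# A sketch of Wilkes-style tabular control: each (instruction, cycle)
# pair indexes a row of control signals. Signal names are hypothetical.

MICROPROGRAM = {
    ("ADD", 0): {"mem_read": True,  "alu_op": None,  "reg_write": False},
    ("ADD", 1): {"mem_read": False, "alu_op": "add", "reg_write": False},
    ("ADD", 2): {"mem_read": False, "alu_op": None,  "reg_write": True},
}

def control_signals(opcode, cycle):
    """Look up the control word for this step of this instruction."""
    return MICROPROGRAM[(opcode, cycle)]
```

Stored in a memory, such a table is a microprogram; optimized into gates, it becomes hard-wired combinational control logic.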

Increasing execution speed

Complicating this simple-looking series of steps is the fact that the memory hierarchy, which includes caching, main memory and non-volatile storage like hard disks (where the program instructions and data reside), has always been slower than the processor itself. Step (2) often introduces a lengthy (in CPU terms) delay while the data arrives over the computer bus. A considerable amount of research has been put into designs that avoid these delays as much as possible. Over the years, a central goal was to execute more instructions in parallel, thus increasing the effective execution speed of a program. These efforts introduced complicated logic and circuit structures. Initially, these techniques could only be implemented on expensive mainframes or supercomputers due to the amount of circuitry needed for these techniques. As semiconductor manufacturing progressed, more and more of these techniques could be implemented on a single semiconductor chip. See Moore's law.

Instruction set choice

Instruction sets have shifted over the years, from originally very simple to sometimes very complex (in various respects). In recent years, load–store architectures, VLIW and EPIC types have been in fashion. Architectures that deal with data parallelism include SIMD and vector designs. Some labels used to denote classes of CPU architectures are not particularly descriptive, especially the CISC label; many early designs retroactively denoted "CISC" are in fact significantly simpler than modern RISC processors (in several respects).

However, the choice of instruction set architecture may greatly affect the complexity of implementing high-performance devices. The prominent strategy used to develop the first RISC processors was to simplify instructions to a minimum of individual semantic complexity, combined with high encoding regularity and simplicity. Such uniform instructions were easily fetched, decoded, and executed in a pipelined fashion, and this simplicity reduced the number of logic levels needed to reach high operating frequencies; instruction caches compensated for the higher operating frequency and inherently low code density, while large register sets were used to factor out as many of the (slow) memory accesses as possible.

Instruction pipelining

One of the first, and most powerful, techniques to improve performance is the use of instruction pipelining. Early processor designs would carry out all of the steps above for one instruction before moving onto the next. Large portions of the circuitry were left idle at any one step; for instance, the instruction decoding circuitry would be idle during execution and so on.

Pipelining improves performance by allowing a number of instructions to work their way through the processor at the same time. In the same basic example, the processor would start to decode (step 1) a new instruction while the last one was waiting for results. This would allow up to four instructions to be "in flight" at one time, making the processor look four times as fast. Although any one instruction takes just as long to complete (there are still four steps) the CPU as a whole "retires" instructions much faster.
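The speedup can be estimated with a back-of-the-envelope model, assuming an idealized pipeline where every stage takes one cycle and no stalls occur:

```python
# Idealized cycle counts for sequential vs. pipelined execution,
# assuming one cycle per stage and no stalls or hazards.

def sequential_cycles(n_instructions, n_stages):
    # each instruction runs all stages alone before the next starts
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    # the first instruction fills the pipe, then one retires per cycle
    return n_stages + (n_instructions - 1)

# 100 instructions through a 4-stage pipeline:
# 400 cycles sequentially vs. 103 pipelined, approaching a 4x speedup.
```

The asymptotic speedup equals the number of stages, which is why the four-stage example above "looks four times as fast" once the pipeline is full.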

RISC makes pipelines smaller and much easier to construct by cleanly separating each stage of the instruction process and making them take the same amount of time—one cycle. The processor as a whole operates in an assembly line fashion, with instructions coming in one side and results out the other. Due to the reduced complexity of the classic RISC pipeline, the pipelined core and an instruction cache could be placed on the same size die that would otherwise fit the core alone on a CISC design. This was the real reason that RISC was faster. Early designs like the SPARC and MIPS often ran over 10 times as fast as Intel and Motorola CISC solutions at the same clock speed and price.

Pipelines are by no means limited to RISC designs. By 1986 the top-of-the-line VAX implementation (VAX 8800) was a heavily pipelined design, slightly predating the first commercial MIPS and SPARC designs. Most modern CPUs (even embedded CPUs) are now pipelined, and microcoded CPUs with no pipelining are seen only in the most area-constrained embedded processors. Large CISC machines, from the VAX 8800 to the modern Intel and AMD processors, are implemented with both microcode and pipelines. Improvements in pipelining and caching are the two major microarchitectural advances that have enabled processor performance to keep pace with the circuit technology on which they are based.

Cache

It was not long before improvements in chip manufacturing allowed for even more circuitry to be placed on the die, and designers started looking for ways to use it. One of the most common was to add an ever-increasing amount of cache memory on-die. Cache is very fast and expensive memory. It can be accessed in a few cycles as opposed to many needed to "talk" to main memory. The CPU includes a cache controller which automates reading and writing from the cache. If the data is already in the cache it is accessed from there – at considerable time savings, whereas if it is not the processor is "stalled" while the cache controller reads it in.
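What the cache controller automates can be sketched as a direct-mapped cache in front of a slow backing store. The line count, the tag/index split, and the dictionary backing store here are all illustrative assumptions, not a description of any particular CPU.

```python
# A sketch of a direct-mapped cache: each address maps to exactly one
# line, identified by a tag. Sizes and policies are illustrative only.

class DirectMappedCache:
    def __init__(self, n_lines, backing):
        self.lines = [None] * n_lines          # each line holds (tag, value)
        self.backing = backing                 # slow "main memory"
        self.hits = self.misses = 0

    def read(self, address):
        index = address % len(self.lines)      # which line this address maps to
        tag = address // len(self.lines)       # identifies which address owns it
        line = self.lines[index]
        if line is not None and line[0] == tag:
            self.hits += 1                     # data already cached: fast path
            return line[1]
        self.misses += 1                       # miss: "stall" and fill the line
        value = self.backing[address]
        self.lines[index] = (tag, value)
        return value

cache = DirectMappedCache(4, {a: a * 10 for a in range(16)})
cache.read(5)
cache.read(5)
# the first read misses and fills the line; the second hits it
```

Real caches add associativity, write policies, and multiple levels, but the hit/miss distinction driving the "stall" behavior is the same.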

RISC designs started adding cache in the mid-to-late 1980s, often only 4 KB in total. This number grew over time, and typical CPUs now have at least 2 MB, while more powerful CPUs come with 4, 6, 12, or even 32 MB or more, with the most being 768 MB in the newly released EPYC Milan-X line, organized in multiple levels of a memory hierarchy. Generally speaking, more cache means more performance, due to reduced stalling.

Caches and pipelines were a perfect match for each other. Previously, it didn't make much sense to build a pipeline that could run faster than the access latency of off-chip memory. Using on-chip cache memory instead, meant that a pipeline could run at the speed of the cache access latency, a much smaller length of time. This allowed the operating frequencies of processors to increase at a much faster rate than that of off-chip memory.

Branch prediction

One barrier to achieving higher performance through instruction-level parallelism stems from pipeline stalls and flushes due to branches. Normally, whether a conditional branch will be taken isn't known until late in the pipeline as conditional branches depend on results coming from a register. From the time that the processor's instruction decoder has figured out that it has encountered a conditional branch instruction to the time that the deciding register value can be read out, the pipeline needs to be stalled for several cycles, or if it's not and the branch is taken, the pipeline needs to be flushed. As clock speeds increase the depth of the pipeline increases with it, and some modern processors may have 20 stages or more. On average, every fifth instruction executed is a branch, so without any intervention, that's a high amount of stalling.

Techniques such as branch prediction and speculative execution are used to lessen these branch penalties. Branch prediction is where the hardware makes educated guesses on whether a particular branch will be taken. In practice, one side of a branch is taken much more often than the other. Modern designs have rather complex statistical prediction systems, which watch the results of past branches to predict the future with greater accuracy. The guess allows the hardware to prefetch instructions without waiting for the register read. Speculative execution is a further enhancement in which the code along the predicted path is not just prefetched but also executed before it is known whether the branch should be taken or not. This can yield better performance when the guess is good, with the risk of a huge penalty when the guess is bad because instructions need to be undone.
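One classic statistical scheme is the two-bit saturating counter, sketched below: the predictor must be wrong twice in a row before its prediction flips, so a single anomalous outcome on a mostly-taken branch costs only one misprediction. The starting state is an assumption for illustration.

```python
# A sketch of a two-bit saturating-counter branch predictor.
# Counter states 0-3; values >= 2 mean "predict taken".

class TwoBitPredictor:
    def __init__(self):
        self.counter = 2                      # start weakly "taken" (assumed)

    def predict(self):
        return self.counter >= 2

    def update(self, taken):
        # saturate at the ends so one surprise doesn't flip the prediction
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
outcomes = [True, True, False, True, True]    # a mostly-taken branch
correct = 0
for taken in outcomes:
    if p.predict() == taken:
        correct += 1
    p.update(taken)
# the single not-taken outcome causes exactly one misprediction: correct == 4
```

Modern predictors layer pattern history tables and global history on top of this idea, but the saturating counter remains the basic building block.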

Superscalar

Even with all of the added complexity and gates needed to support the concepts outlined above, improvements in semiconductor manufacturing soon allowed even more logic gates to be used.

In the outline above the processor processes parts of a single instruction at a time. Computer programs could be executed faster if multiple instructions were processed simultaneously. This is what superscalar processors achieve, by replicating functional units such as ALUs. The replication of functional units was only made possible when the die area of a single-issue processor no longer stretched the limits of what could be reliably manufactured. By the late 1980s, superscalar designs started to enter the market place.

In modern designs it is common to find two load units, one store (many instructions have no results to store), two or more integer math units, two or more floating point units, and often a SIMD unit of some sort. The instruction issue logic grows in complexity by reading in a huge list of instructions from memory and handing them off to the different execution units that are idle at that point. The results are then collected and re-ordered at the end.
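The throughput gain can be estimated by extending the idealized pipeline model with an issue width, again assuming a perfect mix of independent instructions and no structural hazards:

```python
# A rough model of superscalar throughput: instructions enter the
# pipeline in groups of `issue_width` per cycle. Assumes an ideal mix
# of independent instructions and no structural hazards.

def superscalar_cycles(n_instructions, n_stages, issue_width):
    groups = -(-n_instructions // issue_width)   # ceiling division
    return n_stages + (groups - 1)

# 100 instructions, 4 stages:
#   1-wide issue -> 103 cycles; 2-wide issue -> 53 cycles
```

Real code rarely supplies a perfect mix of independent instructions, which is why the issue logic described above, and the out-of-order techniques below, matter so much.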

Out-of-order execution

The addition of caches reduces the frequency or duration of stalls due to waiting for data to be fetched from the memory hierarchy, but does not get rid of these stalls entirely. In early designs a cache miss would force the cache controller to stall the processor and wait. Of course there may be some other instruction in the program whose data is available in the cache at that point. Out-of-order execution allows that ready instruction to be processed while an older instruction waits on the cache, then re-orders the results to make it appear that everything happened in the programmed order. This technique is also used to avoid other operand dependency stalls, such as an instruction awaiting a result from a long latency floating-point operation or other multi-cycle operations.
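The issue decision can be sketched as a scheduler that, each cycle, picks any not-yet-issued instruction whose source registers are available, rather than stalling in program order. The register names, latencies, and single-issue restriction below are illustrative assumptions.

```python
# A toy out-of-order issue scheduler. Each instruction is
# (source_registers, destination_register, latency_in_cycles).

def schedule(instrs):
    """Return the order in which instructions issue."""
    done_at = {"r1": 0, "r2": 0}      # registers assumed valid from cycle 0
    issued, order, cycle = set(), [], 0
    while len(order) < len(instrs):
        for i, (srcs, dst, lat) in enumerate(instrs):
            ready = all(done_at.get(s, float("inf")) <= cycle for s in srcs)
            if i not in issued and ready:
                issued.add(i)
                order.append(i)
                done_at[dst] = cycle + lat    # result available after `lat` cycles
                break                         # single-issue: one per cycle
        cycle += 1
    return order

program = [
    (["r1"], "r3", 5),   # i0: long-latency load into r3 (e.g. a cache miss)
    (["r3"], "r4", 1),   # i1: depends on the load
    (["r2"], "r5", 1),   # i2: independent of both
]
# i2 issues ahead of i1 while i1 waits on r3: issue order is [0, 2, 1]
```

A real core also tracks this reordering in a reorder buffer so that results still retire in program order, which is what makes the speculation invisible to software.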

Register renaming

Register renaming refers to a technique used to avoid unnecessary serialized execution of program instructions caused by those instructions reusing the same registers. Suppose two groups of instructions use the same register. Without renaming, one group must execute first to free the register for the other; but if the second group is assigned to a different, equivalent register, both groups of instructions can be executed in parallel.
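The core mechanism can be sketched as a mapping table from architectural to physical registers, with every write allocating a fresh physical register. The register naming scheme (`r*` architectural, `p*` physical) is hypothetical.

```python
# A sketch of register renaming: each write to an architectural register
# is remapped to a fresh physical register, so later reuse of the same
# architectural name no longer creates a false dependency.

def rename(instrs, n_arch_regs=4):
    """instrs: list of (source_regs, dest_reg). Returns renamed list."""
    mapping = {f"r{i}": f"p{i}" for i in range(n_arch_regs)}
    next_phys = n_arch_regs
    renamed = []
    for srcs, dst in instrs:
        phys_srcs = [mapping[s] for s in srcs]   # read the current mappings
        mapping[dst] = f"p{next_phys}"           # fresh register per write
        next_phys += 1
        renamed.append((phys_srcs, mapping[dst]))
    return renamed

# Both instructions write r1, but after renaming they write p4 and p5
# and can proceed independently.
out = rename([(["r0"], "r1"), (["r2"], "r1")])
```

This removes only false (name) dependencies; true data dependencies, where one instruction reads what another wrote, remain and must still be honored by the scheduler.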

Multiprocessing and multithreading

Computer architects have become stymied by the growing mismatch in CPU operating frequencies and DRAM access times. None of the techniques that exploited instruction-level parallelism (ILP) within one program could make up for the long stalls that occurred when data had to be fetched from main memory. Additionally, the large transistor counts and high operating frequencies needed for the more advanced ILP techniques required power dissipation levels that could no longer be cheaply cooled. For these reasons, newer generations of computers have started to exploit higher levels of parallelism that exist outside of a single program or program thread.

This trend is sometimes known as throughput computing. This idea originated in the mainframe market where online transaction processing emphasized not just the execution speed of one transaction, but the capacity to deal with massive numbers of transactions. With transaction-based applications such as network routing and web-site serving greatly increasing in the last decade, the computer industry has re-emphasized capacity and throughput issues.

One way this parallelism is achieved is through multiprocessing systems: computer systems with multiple CPUs. Once reserved for high-end mainframes and supercomputers, small-scale (2–8) multiprocessor servers have become commonplace for the small business market. For large corporations, large-scale (16–256) multiprocessors are common. Even personal computers with multiple CPUs have appeared since the 1990s.

With further transistor size reductions made available by semiconductor technology advances, multi-core CPUs have appeared, in which multiple CPUs are implemented on the same silicon chip. They were initially used in chips targeting embedded markets, where simpler and smaller CPUs allowed multiple instantiations to fit on one piece of silicon. By 2005, semiconductor technology allowed dual high-end desktop CPU CMP chips to be manufactured in volume. Some designs, such as Sun Microsystems' UltraSPARC T1, have reverted to simpler (scalar, in-order) designs in order to fit more processors on one piece of silicon.

Another technique that has become more popular recently is multithreading. In multithreading, when the processor has to fetch data from slow system memory, instead of stalling for the data to arrive, the processor switches to another program or program thread which is ready to execute. Though this does not speed up a particular program/thread, it increases the overall system throughput by reducing the time the CPU is idle.

Conceptually, multithreading is equivalent to a context switch at the operating system level. The difference is that a multithreaded CPU can do a thread switch in one CPU cycle instead of the hundreds or thousands of CPU cycles a context switch normally requires. This is achieved by replicating the state hardware (such as the register file and program counter) for each active thread.
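The replicated-state idea can be sketched as follows; the context layout (a program counter plus eight registers per thread) is an illustrative assumption, not any particular design.

```python
# A sketch of hardware multithreading: the CPU keeps one full copy of
# the architectural state per thread, so a thread switch is just
# selecting a different state copy rather than saving and restoring it.

class MultithreadedCPU:
    def __init__(self, n_threads):
        # replicated state: one (pc, registers) context per hardware thread
        self.contexts = [{"pc": 0, "regs": [0] * 8} for _ in range(n_threads)]
        self.active = 0

    def switch(self):
        # nothing is copied to or from memory, which is why hardware
        # can switch threads in a single cycle
        self.active = (self.active + 1) % len(self.contexts)
        return self.contexts[self.active]

cpu = MultithreadedCPU(2)
cpu.contexts[0]["pc"] = 100    # thread 0 runs until it stalls on memory
cpu.switch()                   # thread 1's state is selected immediately
```

An operating-system context switch, by contrast, must spill and reload this state through memory, which is what costs the hundreds or thousands of cycles mentioned above.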

A further enhancement is simultaneous multithreading. This technique allows superscalar CPUs to execute instructions from different programs/threads simultaneously in the same cycle.

References

  1. ^ Curriculum Guidelines for Undergraduate Degree Programs in Computer Engineering (PDF). Association for Computing Machinery. 2004. p. 60. Archived from the original (PDF) on 2025-08-14. Comments on Computer Architecture and Organization: Computer architecture is a key component of computer engineering and the practicing computer engineer should have a practical understanding of this topic...
  2. ^ Murdocca, Miles; Heuring, Vincent (2007). Computer Architecture and Organization, An Integrated Approach. Wiley. p. 151. ISBN 9780471733881.
  3. ^ Clements, Alan. Principles of Computer Hardware (4th ed.). pp. 1–2.
  4. ^ Flynn, Michael J. (2007). "An Introduction to Architecture and Machines". Computer Architecture Pipelined and Parallel Processor Design. Jones and Bartlett. pp. 1–3. ISBN 9780867202045.
  5. ^ a b Hennessy, John L.; Patterson, David A. (2006). Computer Architecture: A Quantitative Approach (4th ed.). Morgan Kaufmann. ISBN 0-12-370490-1.
  6. ^ Wilkes, M. V. (1969). "The Growth of Interest in Microprogramming: A Literature Survey". ACM Computing Surveys. 1 (3): 139–145. doi:10.1145/356551.356553. S2CID 10673679.
