1 Overview
1.1 Objective
The application under test is a simple Windows-based system that enables the Buying Departments to maintain cost and selling prices of products for multiple countries. The E-Test scope of work is focused on performance evaluation of the application and database servers, exercised through 10 business-critical transactions of the application. The objective of the Performance & Scalability (P&S) effort is to analyze the reasons for slow performance during execution of the critical business transactions and to determine the scalability of the application in terms of the number of concurrent users.
Scope
Features/Attributes within the scope of testing
· The scope of testing is based on an onsite/offshore process model. Under this model, the analysis and design of the performance test, script development and data preparation will be performed offshore; performance test execution will be carried out onsite in the CATE environment.
· Performance tests are based on the assumption that the entire Post-GU application will be treated as a black box.
· The scope of testing includes only the Post-GU application and database servers. A load test with a maximum of 50 concurrent users will be performed to identify the cause of the degradation in performance of the Post-GU application.
· A stress test will be carried out once the system is shown to be scalable for 50 concurrent users, to determine the breaking point of the Post-GU application.
· An endurance test to evaluate the stability, reliability and availability of the application will be conducted once scalability is confirmed.
Features/Attributes not within the scope of testing
· Testing of client emulation through the Citrix server, and its scalability and performance, is out of scope for the Post-GU application.
· The performance evaluation of the client is not in scope; only the performance of the application and database servers is within the scope of the P&S activity.
References
· M&S Performance Test requirement collection document
· Performance testing proposal document
· High-level test plan document
2 Requirements
2.1 Hardware
SNO | Type | Hardware Details
1 | Application Server | Intel Pentium III, 1400 MHz; 1 GB RAM; 66 GB hard disk
2 | Database Server |
*The server setup and the load generation will be at the CATE environment (M&S).
SNO | Type | Configuration | Quantity
1 | Load Generator | Pentium 4, 256 MB RAM, 20 GB HDD | 2 (depending on the virtual users supported)
2 | Controller | Pentium 4, 512 MB RAM, 20 GB HDD | 1
*The configuration can vary based on the availability at the CATE environment for generating load.
2.2 Software
SNO | Type | Software Details
1 | Application server | COM+
2 | Database server | SQL 2000
3 | Operating System | Microsoft Windows 2000 Server, Microsoft Windows 2000 Professional
2.3 Automation Tools
LoadRunner is an industry-leading performance testing tool from Mercury Interactive. It will be used to test the Post-GU application for M&S, simulating the communication between the client and the COM+ server for multiple users in order to find performance problems. LoadRunner V7.51 will be used for this test. To know more about LoadRunner 7.51, please refer to Appendix A.
2.4 Environment Setup
The hardware infrastructure for the test environment will be provided by M&S, as described in section 2.1.
Application Overview
The application has a two-tier server architecture, with a COM+ application server in the business logic layer and a SQL 2000 database server in the persistence layer. Users access the system via a thick client from various geographical locations. The Post-GU system comprises an online sub-system and a batch sub-system. The online sub-system is accessed by the category merchandisers, buyers and suppliers using the client. The batch sub-system manages downloads from the mainframe, uploads back into the mainframe, production of CPP reports, and backups.
Test Environment
The application under test will be deployed in the CATE environment (M&S), and load will be generated within the same network. The application and database are deployed on two different machines that communicate over a common network link; monitoring the network component between them is out of the scope of the performance evaluation.
For test design and execution, further input on the instrumentation strategy will be required from the CATE environment team.
Production Infrastructure
3 Approach
To perform baseline tests on the Post-GU system, determine its scalability, identify the cause of performance degradation and make a comparative study of the Post-GU system, the following steps will be carried out:
a) A list of transactions that are business critical in the Post-GU system will be identified.
b) The workload pattern (percentage of transaction mix) will be defined as a scenario, which will be executed on the Post-GU system to collect baseline statistics.
c) A goal of 50 concurrent connections to the application server is set in order to study the scalability of the system.
d) Scripts will be generated with LoadRunner 7.51 to simulate the communication between the COM client and the application server for the identified transactions, and parameterized for multiple users. These scripts will be used to generate load for the performance evaluation of the application and database servers.
e) Environment and data setup for the performance test execution will be validated in the CATE environment.
f) Monitoring tools will be deployed in the CATE environment to collect statistics on the performance of COM+ and SQL 2000.
g) A preliminary report based on the analysis, covering the scalability point and threshold level of the Post-GU system, will be prepared.
h) An executive summary will be prepared with findings and recommendations, if any, for the improvement of the Post-GU application.
3.1 Script Development
Create Automated Scripts
Automated scripts will be designed for each transaction mentioned in the requirement collection document using the Mercury Interactive LoadRunner 7.51 tool, simulating the client's communication with the COM+ server. Single-user scripts will be recorded with the tool and parameterized for load generation so that the same transaction can be simulated for multiple simultaneous users. Transaction dependencies will be identified and incorporated in the scripts. Error-handling routines will be added to validate the server responses to the client's requests from the tool. The scripts will be mapped to the various scenarios to be tested. Navigation of each transaction will be as specified in the transaction traversal document.
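As an illustration of the intended script structure, a minimal sketch of a LoadRunner Vuser Action() is shown below. It assumes a single business transaction; the transaction name "Authorization" and the parameter "ProductID" are placeholders used only for illustration, and the actual COM+ interface calls, which are generated by the tool during recording, are omitted.

Action()
{
    int rc;

    /* Parameterized test data substituted per user/iteration from the data file */
    lr_output_message("Processing product %s", lr_eval_string("{ProductID}"));

    /* Think time as specified in the transaction traversal document */
    lr_think_time(10);

    lr_start_transaction("Authorization");

    /* ... recorded COM+ server calls for the Authorization transaction ... */
    rc = 0;   /* placeholder for the return code of the recorded call */

    if (rc != 0) {
        /* Error-handling routine: validate the server response and fail the transaction */
        lr_error_message("Authorization failed, rc=%d", rc);
        lr_end_transaction("Authorization", LR_FAIL);
        return -1;
    }

    lr_end_transaction("Authorization", LR_PASS);
    return 0;
}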
Script Validation
Scripts will be tested with multiple users in the environment where they were generated, and then in the CATE environment, to validate them before actual execution of the test. The data setup required for load generation will also be verified along with the scripts. The server monitoring scripts will be verified for the accuracy of their output.
Create LoadRunner Scenarios
I. Load Testing Scenario
The Post-GU application is to be tested for performance and scalability under a load of 50 concurrent users performing the business-critical transactions. Load will be built up on the COM+ application server gradually, with 5 users added every 2 minutes. Once loaded into the system, each user will perform the transactions mentioned in the scenario continuously, with transaction think times as specified in the transaction traversal document. Once 50 users are loaded into the system, the load will be maintained for 10 minutes to study the performance of the Post-GU system under a 50-user load.
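With this ramp-up, the full load of 50 users is reached after 20 minutes (50 users at 5 users per 2-minute step); together with the 10-minute steady-state period, the load-test scenario runs for roughly 30 minutes, which matches the duration given for the 50 User Load Test in the workload table.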
II. Stress Testing Scenario
This scenario will be executed once the Post-GU system is shown to be scalable. The Post-GU application will be stressed with a load of 50 concurrent users performing the business-critical transactions. Load will be built up on the server gradually, with 5 users added every 2 minutes. Once loaded into the system, each user will perform the transactions mentioned in the scenario continuously. The think times between transactions will be eliminated to create a congestion of requests at the server end. Once 50 users are loaded into the system, the load will be maintained for 5 minutes to study the performance of the system under stress from the 50-user load.
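One way to implement this is to reuse the load-test scripts and configure LoadRunner's run-time think-time settings to ignore the recorded think times for the stress run, so that a separate set of stress scripts does not need to be maintained.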
III. Endurance Testing Scenario
This scenario will be executed once the Post-GU system is shown to be scalable. The Post-GU application will be loaded with 25 users and the load maintained for 8 hours, to study the reliability and availability of the system under a constant load sustained for a long duration.
3.2 Workload Criteria
Business Function Overview
The following table lists the priority of the business
functions being tested. A high rank implies that the business function must be
tested and is part of the critical path.
Rank | Business Function/Operation
High | Open Product
High | PSL
High | Buy
High | Sell
High | Authorization
High | Reconciliation
High | Price Suggestion
High | Margin Charges
High | Generate PSLs
High | VAT
Pre-Production Workload Scenarios
The table below shows the high-level test scenarios that will be executed in the CATE environment.
Scenario # | Scenario Name | Number of Concurrent Users | Workload Distribution | Duration
1 | 50 User Load Test | 50 users | 8% PSL, 15% Buy, 15% Sell, 25% Authorization, 25% Reconciliation, 4% Price Suggestion, 2% Margin Charges, 2% Generate PSLs, 4% VAT | 30 minutes
2 | 50 User Stress Test | 50 users | 8% PSL, 15% Buy, 15% Sell, 25% Authorization, 25% Reconciliation, 4% Price Suggestion, 2% Margin Charges, 2% Generate PSLs, 4% VAT | 30 minutes
3 | 20 User Endurance Test | 20 users | 8% PSL, 15% Buy, 15% Sell, 25% Authorization, 25% Reconciliation, 4% Price Suggestion, 2% Margin Charges, 2% Generate PSLs, 4% VAT | 8 hours
NOTE: The duration refers to the length of the test after the goal number of users is reached.
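Applied to the 50-user scenarios, this distribution corresponds to Vuser group sizes of roughly 4 PSL, 7-8 Buy, 7-8 Sell, 12-13 Authorization, 12-13 Reconciliation, 2 Price Suggestion, 1 Margin Charges, 1 Generate PSLs and 2 VAT users (the percentages applied to 50 users and rounded).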
3.3 Test Execution
Execute Pre-Test Checklist
Execute Scenario
Baseline tests to validate the scripts will be conducted before the actual execution of the scenarios. Scripts will be executed based on the scenarios identified in the workload criteria. Data collected from the various monitors will be validated and correlated. According to the effort estimates, a maximum of 3 iterations of test execution will be performed. After each iteration, the results will be analyzed and changes will be derived for implementation in the next iteration.
Capture and deliver test results
For each test execution, the following LoadRunner summary reports will be delivered to the client:
1. Scalability metrics
· Transaction average response time vs. load
· Throughput vs. load
· Transactions per second
2. Error statistics
· System or client-side errors vs. the load at which the first error was reported in the tool
3. Server statistics
· Resource utilization (CPU, memory and disk)
· Component monitors
4. Network statistics
· Network bandwidth utilization
Status Reporting
During script development and verification, the E-Test team will deliver a daily status report via e-mail. The status report will be delivered by the close of business each day and will document the progress of script development. All issues and their resolutions will be included in the status report.
During script execution, E-Test Center will continue to
deliver daily status reports via e-mail. The status reports will include a list
of the scenarios executed during the day as well as any observations from the
test. The transaction summary reports and other performance reports will be
delivered at the completion of each scenario.
Issue Reporting
All issues will be entered into a spreadsheet tracking tool. The issue form will contain the following information:
· Date the issue was reported
· Name of the tester reporting the issue
· Status of the issue
· Application associated with the issue
· Detailed description of the issue
· Detailed description of the issue resolution
E-Test Center will provide M&S with the same information in the status reports. If a critical issue arises, E-Test Center will notify the M&S development team offshore as soon as possible, rather than waiting to include it in the status report. A critical issue is defined as an issue that stops progress and for which no workaround can be identified.
4 Test Deliverables
The E-Test team will provide the following deliverables:
4.1 Test Plan (this document)
This document contains the plan of action for the test, detailing the approach, hardware and software requirements, environment details, resource details, schedules, etc.
4.2 Preliminary Test Report
A report containing details of scalability, transaction performance summaries and a list of findings will be delivered at the end of every iteration. This report gives detailed information on whether the system was able to scale up. It also lists performance bottlenecks, if any, in the various tiers (application or database) of the architecture.
Sample Performance Report
4.3 Periodic Status Updates through E-mail
Updates on the status of the tests will be sent through e-mail weekly.
4.4 Final Test Report
An executive summary of the entire testing process carried out on the Post-GU application in the CATE environment, indicating the scalability and performance bottlenecks identified.
5 Resources and Schedules
5.1 Personnel
Phases | Required | Available | Duration
Requirement Understanding | 2 | 2 | 2 days
Selection of Tool | 2 | 2 | 4 days
Analysis | 2 | 2 | 6 days
Test Design | 2 | 2 | 6 days
Test Development | 2 | 2 | 6 days
Test Execution | 2 | 2 | 15 days
Test Results | 2 | 2 | 4 days
5.2 Equipment
Type and Attributes | Quantity Required | Quantity Available | Duration | Deadline / Schedule
LoadRunner 7.51 | 50 VU with Controller | 50 VU with Controller | 2 weeks |
Load Generator machines | 2 | 2 | 2 weeks |
*The above equipment is the minimum expected in the CATE environment.
5.3 Accessories
Type and Attributes | Quantity Required | Quantity Available | Duration | Deadline / Schedule
Desktop machines that can access the CATE environment, for installing server monitoring tools and for test data analysis | 2 | 2 | 2 weeks |
5.4 Schedules
Task Name | Duration | Start Date | Finish Date | Resource Allocated
Requirement Understanding | 1.9 days? | | |
  Requirement understanding | 4 hrs? | | |
  Prepare high-level project plan | 8 hrs? | | |
  Review and acceptance of project plan | 3.2 hrs? | | |
  Deliver high-level project plan | 0 days? | | |
Selection of Tool | 3.2 days? | | |
  Evaluation of LoadRunner tool for automation | 16 hrs? | | |
  Prepare evaluation document | 6.4 hrs? | | |
  Review and accept tool selection | 3.2 hrs? | | |
  Deliver tool evaluation document | 0 days? | | |
Analysis | 6 days? | | |
  Business transaction review | 12.8 hrs? | | |
  Determine critical transactions, metrics, workloads | 12.8 hrs? | | |
  Determine high-level data requirements, environment setup and deployment considerations | 12.8 hrs? | | |
  Prepare Test Plan document | 6.4 hrs? | | |
  Review and acceptance of Test Plan document | 3.2 hrs? | | |
  Deliver Test Plan document | 0 days? | | |
Test Design | 6 days? | | |
  Determine high-risk architecture areas, instrumentation strategy | 12.8 hrs? | | |
  Decompose critical business transactions, identify data creation process, define repeatable test strategy, define testing entry/exit criteria, risks and mitigation strategies, limitations and assumptions | 12.8 hrs? | | |
  Identification and preparation of load test scenarios (scenarios will be prepared by M&S and grouped together for test execution) | 12.8 hrs? | | |
  Prepare Test Design document | 6.4 hrs? | | |
  Review and acceptance of Test Design document | 3.2 hrs? | | |
  Deliver Test Design document | 0 days? | | |
Test Development | 5.63 days? | | |
  Build scripts and validate | 40 hrs? | | |
  Review and accept scripts | 5 hrs? | | |
  Deliver scripts for execution | 0 days? | | |
Travel | 1 day? | | |
  Travel to UK | 1 day? | | |
Test Execution | 14.63 days? | | |
  Test environment set-up and validation | 16 hrs? | | |
  Build data and validate | 16 hrs? | | |
  Update scripts and test drivers | 5 hrs? | | |
  Create baseline test to validate environment, monitors and scripts | 8 hrs? | | |
  Execute tests and collate test data | 12 hrs? | | |
  Analyze data and list suggestions | 20 hrs? | | |
  Review suggestions and apply changes | 40 hrs? | | |
Travel | 0.38 days? | | |
  Travel to India | 3 hrs? | | |
Test Results | 4 days? | | |
  Correlate results summary | 12.8 hrs? | | |
  Prepare results summary document | 12.8 hrs? | | |
  Review and acceptance of results summary document | 6.4 hrs? | | |
  Deliver results summary | 0 days? | | |
6 Entry, Exit Criteria and Test Stop Criteria
6.1 Entry Criteria
- Functionally stable application.
- CATE environment availability.
- Post-GU system deployed with application and database servers.
- Data setup for the business transactions completed.

6.2 Exit Criteria
- All the performance objectives are met.
- Performance and scalability effort verified.
- Performance and scalability counters have been collected.
- All test results are documented.
- Test results summary document and recommendation document for performance and scalability of the Post-GU application submitted.
6.3 Test Stop Criteria
· Schedule for using the CATE environment exhausted.
· Test objectives have been met.
· Performance bottleneck causing the degradation in system scalability identified.
· Test results have been analyzed and documented.
· All issues have been documented and discussed with M&S.
7 Suspension Criteria and Resumption Requirements
Suspension Criteria | Resumption Requirements
Unstable system | Functionally stable system
Test environment setup delay or issues for testing the Post-GU application and database server in the CATE environment | Once the test environment is ready, perform baseline tests and actual scenario executions
Delay in providing the testing tool license and load generators as per requirement | Perform tool evaluation, test the developed scripts and continue test execution once the tool is made available, with a corresponding change in schedule
Test data setup invalid | Resume the test on valid data
Bottlenecks identified that degrade application scalability | Repeat the test after the defects are fixed or communicated to the onsite coordinator
Delay in providing the accessories for deploying the monitors and test data extraction tools | Resume monitoring and test data extraction once the necessary items are made available
8 Risk Assessment
8.1 Schedule Risk
· The test schedule is subject to timely delivery of the software and support to the E-Test team.
Mitigation Plan – The test team will closely monitor the progress of the onsite and offshore development teams and escalate anticipated delays to the Project Manager in advance. In case of delays, the team shall spare no effort to complete tasks on time.
· Changes in the application might require rework of the test scripts, which requires additional effort.
Mitigation Plan – The effort involved in the rework of scripts will be communicated to the onsite coordinator, and the test team shall spare no effort to complete tasks on time.
· In case of a server failure during the course of load testing, testing will come to an abrupt halt. Additional effort will be required to bring the environment back up and running.
Mitigation Plan – The effort involved in restoring the environment will be communicated to the development team after consultation with support services, and the test team shall spare no effort to complete tasks on time.
8.2 Technology Risk
· Testing is subject to the availability of the intranet network connection for the load test machines.
Mitigation Plan – The E-Test team will coordinate with the concerned Project Manager and conduct formal meetings with the concerned teams.
· Testing is subject to the deployed application being the same as the one on which the test scripts were generated.
Mitigation Plan – The test team will contact the onsite coordinator and resolve the issue.
· LoadRunner 7.51 does not completely support application server monitoring.
Mitigation Plan – The monitoring features built into the application server, or in-house monitoring scripts, will be used.
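If in-house monitoring scripts become necessary, one possible approach is a small collector built on the Windows Performance Data Helper (PDH) API, sketched below. The two counter paths shown are common examples only; the actual counter set and sampling interval would follow the instrumentation strategy agreed for the CATE environment.

#include <windows.h>
#include <pdh.h>
#include <stdio.h>

/* Minimal PDH-based resource monitor (link with pdh.lib). */
int main(void)
{
    PDH_HQUERY            query;
    PDH_HCOUNTER          cpu, mem;
    PDH_FMT_COUNTERVALUE  val;
    int                   i;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* Example counters: overall CPU utilization and available memory */
    PdhAddCounter(query, TEXT("\\Processor(_Total)\\% Processor Time"), 0, &cpu);
    PdhAddCounter(query, TEXT("\\Memory\\Available MBytes"), 0, &mem);

    PdhCollectQueryData(query);               /* first sample needed for rate counters */

    for (i = 0; i < 10; i++) {                /* 10 samples, 5 seconds apart */
        Sleep(5000);
        PdhCollectQueryData(query);

        PdhGetFormattedCounterValue(cpu, PDH_FMT_DOUBLE, NULL, &val);
        printf("CPU %%: %6.2f   ", val.doubleValue);
        PdhGetFormattedCounterValue(mem, PDH_FMT_DOUBLE, NULL, &val);
        printf("Available MB: %8.2f\n", val.doubleValue);
    }

    PdhCloseQuery(query);
    return 0;
}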
8.3 Resource Risk
· The E-Test team will require clarifications from the Project Managers and the development team during test case preparation and execution. The availability of these people at the right time is a risk.
Mitigation Plan – The test team will coordinate with the concerned Project Manager and will ensure their availability for discussions and clarifications.
9 Limitations and Assumptions
· Support from the developers/business analysts for business transaction review of the list of transactions identified.
· The performance test is to be conducted in the CATE test environment.
· A functionally stable Post-GU application (equivalent to the one used for scripting offshore, on which the test scripts were created) should be deployed in the CATE environment.
· Environment and data setup as per the test plan for the transactions to be tested will be done with the support of a coordinator at onsite/offshore.
· All tests will be conducted with a data volume equivalent to that of the production database setup.
· The testing tool LoadRunner 7.51 will be made available to the E-Test lab team at onsite for execution.
· Machines will be provided for the E-Test team to deploy built-in data extraction tools for data correlation and analysis.
· Access to the servers will be provided to the E-Test team to monitor the various layers of the application test environment.
· During test execution, services that are not related to Post-GU should be stopped, to isolate the test environment for the Post-GU application.
· Onsite support for data collection from the various layers.
· Onsite support for analysis if any bottleneck is identified at the application level.
· Batch jobs scheduled while the online system is in use in the production environment can be simulated in the CATE environment.
· Hardware bottlenecks, if any, will be identified and reported. Tests should be performed only on vertically/horizontally scalable hardware to confirm the resolution of the same; no recommendations based on extrapolation of results will be provided.
· SQL statements will be reviewed and tuned if there is degradation in response time.
· If the response times of any component method call within the scope of the identified transactions are unacceptable, they will be analyzed and suggestions to improve them will be given as part of the executive summary report.