CryptVolt News
How to build a decision tree model in IBM Db2

By reddnbre | April 14, 2023 | Blockchain


After developing a machine learning model, you need a place to run the model and serve predictions. If your company is in the early stage of its AI journey or has budget constraints, you may struggle to find a deployment system for your model. Building ML infrastructure and integrating ML models with the larger business are major bottlenecks to AI adoption [1,2,3]. IBM Db2 can help solve these problems with its built-in ML infrastructure. Anyone with knowledge of SQL and access to a Db2 instance where the in-database ML feature is enabled can easily learn how to build and use a machine learning model in the database.

In this post, I will show how to develop, deploy, and use a decision tree model in a Db2 database.

These are the main steps in this tutorial:

  1. Set up Db2 tables
  2. Explore the ML dataset
  3. Preprocess the dataset
  4. Train a decision tree model
  5. Generate predictions using the model
  6. Evaluate the model

I performed these steps on a Db2 Warehouse on-premises database. Db2 Warehouse on Cloud also supports these ML features.

The machine learning use case

I will use a dataset of historical flights in the US. For each flight, the dataset has information such as the flight’s origin airport, departure time, flying time, and arrival time. A column in the dataset also indicates whether each flight arrived on time or late. Using examples from the dataset, we’ll build a classification model with the decision tree algorithm. Once trained, the model can receive unseen flight data as input and predict whether the flight will arrive on time or late at its destination.

1. Set up Db2 tables

The dataset I use in this tutorial is available here as a CSV file.

Creating a Db2 table

I use the following SQL to create a table for storing the dataset.

db2start
connect to <database_name>

db2 "CREATE TABLE FLIGHTS.FLIGHTS_DATA_V3  (
ID INTEGER NOT NULL GENERATED BY DEFAULT AS IDENTITY,
YEAR INTEGER ,
QUARTER INTEGER ,
MONTH INTEGER ,
DAYOFMONTH INTEGER ,                       
DAYOFWEEK INTEGER ,                       
UNIQUECARRIER VARCHAR(50 OCTETS) ,               
ORIGIN VARCHAR(50 OCTETS) ,                      
DEST VARCHAR(50 OCTETS) ,                       
CRSDEPTIME INTEGER ,                       
DEPTIME INTEGER ,                       
DEPDELAY REAL ,                       
DEPDEL15 REAL ,                       
TAXIOUT INTEGER ,                       
WHEELSOFF INTEGER ,                       
CRSARRTIME INTEGER ,                       
CRSELAPSEDTIME INTEGER ,                       
AIRTIME INTEGER ,                       
DISTANCEGROUP INTEGER ,                       
FLIGHTSTATUS VARCHAR(1) )
ORGANIZE BY ROW";

After creating the table, I use the following SQL to load the data from the CSV file into the table:

db2 "IMPORT FROM 'FLIGHTS_DATA_V3.csv' OF DEL COMMITCOUNT 50000 INSERT INTO FLIGHTS.FLIGHTS_DATA_V3"

I now have the ML dataset loaded into the FLIGHTS.FLIGHTS_DATA_V3 table in Db2. I’ll copy a subset of the data from this table to a separate table for ML model development and evaluation, leaving the original copy of the data intact.

SELECT count(*) FROM FLIGHTS.FLIGHTS_DATA_V3

 — — — 
1000000

Creating a separate table with sample data

Create a table with a 10% sample of rows from the above table, using Db2’s RAND function for random sampling.

CREATE TABLE FLIGHT.FLIGHTS_DATA AS (SELECT * FROM FLIGHTS.FLIGHTS_DATA_V3 WHERE RAND() < 0.1) WITH DATA

Count the number of rows in the sample table.

SELECT count(*) FROM FLIGHT.FLIGHTS_DATA

— — — 
99879

Look at the schema definition of the table.

SELECT NAME, COLTYPE, LENGTH
FROM SYSIBM.SYSCOLUMNS
WHERE TBCREATOR = 'FLIGHT' AND TBNAME = 'FLIGHTS_DATA'
ORDER BY COLNO

FLIGHTSTATUS is the response, or target, column. The others are feature columns.

Find the DISTINCT values in the target column.

From these values, I can see that it’s a binary classification task where each flight arrived either on time or late.

Find the frequencies of the distinct values in the FLIGHTSTATUS column.

SELECT FLIGHTSTATUS, count(*) AS FREQUENCY, CAST(count(*) AS DECIMAL(9,4)) / (SELECT count(*) FROM FLIGHT.FLIGHTS_DATA) AS FRACTION
FROM FLIGHT.FLIGHTS_DATA fdf
GROUP BY FLIGHTSTATUS

From the above, I see the classes are imbalanced. I will not gather any further insights from the full dataset at this point, as that could leak information into the modeling phase.

Creating train/test partitions of the dataset

Before gathering deeper insights into the data, I’ll divide the dataset into train and test partitions using Db2’s RANDOM_SAMPLE SP. I apply stratified sampling to preserve the ratio between the two classes in the generated training dataset.

Create a TRAIN partition.

CALL IDAX.RANDOM_SAMPLE('intable=FLIGHT.FLIGHTS_DATA, fraction=0.8, outtable=FLIGHT.FLIGHTS_TRAIN, by=FLIGHTSTATUS')

Copy the remaining data to a TEST partition.

CREATE TABLE FLIGHT.FLIGHTS_TEST AS (SELECT * FROM FLIGHT.FLIGHTS_DATA FDF WHERE FDF.ID NOT IN(SELECT FT.ID FROM FLIGHT.FLIGHTS_TRAIN FT)) WITH DATA
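The two statements above implement a stratified 80/20 split inside the database. For readers who want to see the underlying idea outside Db2, here is a minimal pure-Python sketch of stratified sampling. The column names and the 0.8 fraction mirror the tutorial; the data and the function itself are illustrative, not the IDAX implementation:

```python
import random
from collections import defaultdict

def stratified_split(rows, label_key, fraction=0.8, seed=42):
    """Split rows into (train, test), sampling `fraction` of each class.

    Mirrors the intent of IDAX.RANDOM_SAMPLE with by=<label>: the class
    ratio of the input is preserved in the training partition.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for row in rows:
        by_class[row[label_key]].append(row)

    train, test = [], []
    for members in by_class.values():
        rng.shuffle(members)
        cut = int(len(members) * fraction)
        train.extend(members[:cut])
        test.extend(members[cut:])
    return train, test

# Tiny synthetic example: 80% on-time ("0"), 20% delayed ("1").
flights = [{"ID": i, "FLIGHTSTATUS": "0" if i % 5 else "1"} for i in range(1000)]
train, test = stratified_split(flights, "FLIGHTSTATUS")
print(len(train), len(test))  # 800 200
```

Shuffling within each class before cutting is what keeps the class ratio intact in the training partition, which is what the by=FLIGHTSTATUS option requests from Db2.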

2. Explore the data

In this step, I’ll look at both sample data and the summary statistics of the training dataset to gain insights into the dataset.

Look at some sample data.

SELECT * FROM FLIGHT.FLIGHTS_TRAIN FETCH FIRST 10 ROWS ONLY

Some columns encode the time as numbers:

 — CRSDEPTIME: Computer Reservation System (scheduled) Departure Time (hhmm)

 — DEPTIME: Departure Time (hhmm)

 — CRSARRTIME: Computer Reservation System (scheduled) Arrival Time (hhmm)

Now, I gather summary statistics from FLIGHTS_TRAIN using the SUMMARY1000 SP to get a global view of the characteristics of the dataset.

CALL IDAX.SUMMARY1000('intable=FLIGHT.FLIGHTS_TRAIN, outtable=FLIGHT.FLIGHTS_TRAIN_SUM1000')

Here, intable is the name of the input table from which I want SUMMARY1000 to gather statistics. outtable is the name of the table where SUMMARY1000 will store the gathered statistics for the entire dataset. Besides the outtable, SUMMARY1000 creates several additional output tables: one table with statistics for each column type. Our dataset has two types of columns, numeric and nominal, so SUMMARY1000 will generate two additional tables. These additional tables follow this naming convention: the name of the outtable + the column type. In our case, the column types are NUM, representing numeric, and CHAR, representing nominal. So, the names of these two additional tables will be as follows:

FLIGHTS_TRAIN_SUM1000_NUM

FLIGHTS_TRAIN_SUM1000_CHAR

Having the statistics in separate tables per datatype makes it easier to view the statistics that apply to a specific datatype and reduces the number of columns whose statistics are viewed together. This simplifies the analysis process.

Check the summary statistics of the numeric columns.

SELECT * FROM FLIGHT.FLIGHTS_TRAIN_SUM1000_NUM

For the numeric columns, SUMMARY1000 gathers the following statistics:

  • Missing value count
  • Non-missing value count
  • Average
  • Variance
  • Standard deviation
  • Skewness
  • Excess kurtosis
  • Minimum
  • Maximum

Each of these statistics can help uncover insights into the dataset. For instance, I can see that the DEPDEL15 and DEPDELAY columns have 49 missing values. There are large values in these columns: AIRTIME, CRSARRTIME, CRSDEPTIME, CRSELAPSEDTIME, DEPDELAY, DEPTIME, TAXIOUT, WHEELSOFF, and YEAR. Since I will create a decision tree model, I don’t need to deal with the large values or the missing values; Db2 handles both issues natively.

Next, I examine the summary statistics of the nominal columns.

SELECT * FROM FLIGHT.FLIGHTS_TRAIN_SUM1000_CHAR

For the nominal columns, SUMMARY1000 gathered the following statistics:

  • Number of missing values
  • Number of non-missing values
  • Number of distinct values
  • Frequency of the most frequent value

3. Preprocess data

From the above data exploration, I can see that the dataset has only a few missing values. Four TIME columns have large values: AIRTIME, CRSARRTIME, DEPTIME, WHEELSOFF. I’ll leave the nominal values in all columns as-is, as the decision tree implementation in Db2 can handle them natively.

Extract the hour part from the TIME columns: CRSARRTIME, DEPTIME, WHEELSOFF.

Looking up the description of the dataset, I see the values in the CRSARRTIME, DEPTIME, and WHEELSOFF columns are hhmm encodings of the time values. I extract the hour part of these values to create, hopefully, better features for the learning algorithm.

Scale the CRSARRTIME column: dividing the value by 100 gives the hour of the scheduled arrival time:

UPDATE FLIGHT.FLIGHTS_TRAIN SET CRSARRTIME = CRSARRTIME / 100

Scale the DEPTIME column: dividing the value by 100 gives the hour of the departure time:

UPDATE FLIGHT.FLIGHTS_TRAIN SET DEPTIME = DEPTIME / 100

Scale the WHEELSOFF column: dividing the value by 100 gives the hour of the wheels-off time:

UPDATE FLIGHT.FLIGHTS_TRAIN SET WHEELSOFF = WHEELSOFF / 100
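Because these columns are typed INTEGER, the division by 100 truncates, leaving just the hour. A quick Python sketch of the same transformation (the example values are hypothetical, for illustration only):

```python
def hhmm_to_hour(hhmm: int) -> int:
    """Extract the hour from an integer hhmm time code.

    Integer division by 100 drops the minutes, which is exactly what the
    UPDATE ... SET COL = COL / 100 statements do for Db2 INTEGER columns.
    """
    return hhmm // 100

# A scheduled arrival of 1745 (5:45 pm) becomes hour 17.
print(hhmm_to_hour(1745))  # 17
print(hhmm_to_hour(905))   # 9  (9:05 am)
```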

4. Train a decision tree model

Now the training dataset is ready for the decision tree algorithm.

I train a decision tree model using the GROW_DECTREE SP.

CALL IDAX.GROW_DECTREE('model=FLIGHT.flight_dectree, intable=FLIGHT.FLIGHTS_TRAIN, id=ID, target=FLIGHTSTATUS')

I called this SP with the following parameters:

  • model: the name I want to give to the decision tree model, FLIGHT_DECTREE
  • intable: the name of the table where the training dataset is stored
  • id: the name of the ID column
  • target: the name of the target column

After the model training completes, the GROW_DECTREE SP generates several tables with metadata from the model and the training dataset. Here are some of the key tables:

  • FLIGHT_DECTREE_MODEL: this table contains metadata about the model. Examples include the depth of the tree, the strategy for handling missing values, and the number of leaf nodes in the tree.
  • FLIGHT_DECTREE_NODES: this table provides information about each node in the decision tree.
  • FLIGHT_DECTREE_COLUMNS: this table provides information on each input column and its role in the trained model. The information includes the importance of a column in generating a prediction from the model.

This link has the complete list of model tables and their details.

5. Generate predictions from the model

Since the FLIGHT_DECTREE model is trained and deployed in the database, I can use it to generate predictions on the test data in the FLIGHTS_TEST table.

First, I preprocess the test dataset using the same preprocessing logic that I applied to the TRAINING dataset.

Scale the CRSARRTIME column: dividing the value by 100 gives the hour of the scheduled arrival time:

UPDATE FLIGHT.FLIGHTS_TEST SET CRSARRTIME = CRSARRTIME / 100

Scale the DEPTIME column: dividing the value by 100 gives the hour of the departure time:

UPDATE FLIGHT.FLIGHTS_TEST SET DEPTIME = DEPTIME / 100

Scale the WHEELSOFF column: dividing the value by 100 gives the hour of the wheels-off time:

UPDATE FLIGHT.FLIGHTS_TEST SET WHEELSOFF = WHEELSOFF / 100

Generating predictions

I use the PREDICT_DECTREE SP to generate predictions from the FLIGHT_DECTREE model:

CALL IDAX.PREDICT_DECTREE('model=FLIGHT.flight_dectree, intable=FLIGHT.FLIGHTS_TEST, outtable=FLIGHT.FLIGHTS_TEST_PRED, prob=true, outtableprob=FLIGHT.FLIGHTS_TEST_PRED_DIST')

Here is the list of parameters I passed when calling this SP:

  • model: the name of the decision tree model, FLIGHT_DECTREE
  • intable: the name of the input table to generate predictions from
  • outtable: the name of the table the SP will create and store predictions in
  • prob: a boolean flag indicating whether to include the probability of each prediction in the output
  • outtableprob: the name of the output table where the probability of each prediction will be stored

6. Evaluate the model

Using the generated predictions for the test dataset, I compute a few metrics to evaluate the quality of the model’s predictions.

Creating a confusion matrix

I use the CONFUSION_MATRIX SP to create a confusion matrix based on the model’s predictions on the TEST dataset.

CALL IDAX.CONFUSION_MATRIX('intable=FLIGHT.FLIGHTS_TEST, resulttable=FLIGHT.FLIGHTS_TEST_PRED, id=ID, target=FLIGHTSTATUS, matrixTable=FLIGHT.FLIGHTS_TEST_CMATRIX')

Here are some of the key parameters that I passed when calling this SP:

  • intable: the name of the table that contains the dataset and the actual values of the target column
  • resulttable: the name of the table that contains the column with predicted values from the model
  • target: the name of the target column
  • matrixTable: the output table where the SP will store the confusion matrix

After the SP completes its run, we have the following output table with the statistics for the confusion matrix.

FLIGHTS_TEST_CMATRIX:

This table has three columns. The REAL column has the actual flight status, and the PREDICTION column has the predicted flight status. Since flight status takes two values, 0 (on time) or 1 (delayed), we have four possible combinations between values in the REAL and PREDICTION columns:

  1. TRUE NEGATIVE: REAL: 0, PREDICTION: 0. The model correctly predicted the status of the flights that arrived on schedule. From the CNT column, we see that 11795 rows from the TEST table belong to this combination.
  2. FALSE POSITIVE: REAL: 0, PREDICTION: 1. These flights actually arrived on time, but the model predicted them to be delayed. 671 is the count of such flights.
  3. FALSE NEGATIVE: REAL: 1, PREDICTION: 0. These flights arrived late, but the model predicted them to be on time. From the CNT column, we find their count to be 2528.
  4. TRUE POSITIVE: REAL: 1, PREDICTION: 1. The model correctly identified the flights that were late. The count is 4981.

I use these counts to compute a few evaluation metrics for the model. To do so, I use the CMATRIX_STATS SP as follows:

CALL IDAX.CMATRIX_STATS('matrixTable=FLIGHT.FLIGHTS_TEST_CMATRIX')

The only parameter this SP needs is the name of the table that contains the statistics generated by the CONFUSION_MATRIX SP in the previous step. CMATRIX_STATS generates two sets of output: the first shows overall quality metrics of the model; the second contains the model’s predictive performance for each class.

The first output, overall model metrics, includes correct predictions, incorrect predictions, overall accuracy, and weighted accuracy. From this output, I see that the model has an overall accuracy of 83.98% and a weighted accuracy of 80.46%.
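As a sanity check, these overall metrics can largely be reproduced from the four confusion-matrix counts above. Assuming weighted accuracy means the mean of the per-class recalls (a common definition; Db2’s exact formula and rounding may differ slightly), the counts give about 80.5%, close to the reported 80.46%:

```python
# Counts from the FLIGHTS_TEST_CMATRIX table above.
tn, fp, fn, tp = 11795, 671, 2528, 4981

total = tn + fp + fn + tp                       # 19975 test rows
accuracy = (tn + tp) / total                    # correct / all predictions
recall_on_time = tn / (tn + fp)                 # recall for class 0
recall_delayed = tp / (tp + fn)                 # recall for class 1
weighted_accuracy = (recall_on_time + recall_delayed) / 2

print(f"accuracy = {accuracy:.2%}")                    # 83.98%
print(f"weighted accuracy = {weighted_accuracy:.2%}")  # ~80.48%, vs. reported 80.46%
```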

With classification tasks, it’s usually helpful to view the model’s quality metrics for each individual class. The second output from the CMATRIX_STATS SP contains these class-level quality metrics.

For each class, this output includes the True Positive Rate (TPR), False Positive Rate (FPR), Positive Predictive Value (PPV, or precision), and F-measure (F1 score).
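Using the same four counts, the class-level metrics for the delayed class (1) can be sketched with the standard definitions; the values below are computed from the counts, not read from the SP output:

```python
# Counts from the FLIGHTS_TEST_CMATRIX table above.
tn, fp, fn, tp = 11795, 671, 2528, 4981

tpr = tp / (tp + fn)              # True Positive Rate (recall for class 1)
fpr = fp / (fp + tn)              # False Positive Rate
ppv = tp / (tp + fp)              # Positive Predictive Value (precision)
f1 = 2 * ppv * tpr / (ppv + tpr)  # F-measure (harmonic mean of PPV and TPR)

print(f"TPR={tpr:.3f} FPR={fpr:.3f} PPV={ppv:.3f} F1={f1:.3f}")
```

The low FPR shows the model rarely flags an on-time flight as delayed, while the more modest TPR reflects the class imbalance noted earlier.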

Conclusions and key takeaways

If you want to build and deploy an ML model in a Db2 database using Db2’s built-in stored procedures, I hope you find this tutorial useful. Here are the main takeaways:

  • Demonstrated a complete workflow of building and using a decision tree model in a Db2 database using in-database ML stored procedures.
  • For each step in the workflow, I provided concrete, usable SQL statements and stored procedure calls. For each code example, where applicable, I explained intuitively what it does, along with its inputs and outputs.
  • Included references to IBM Db2’s documentation for the ML stored procedures I used in this tutorial.

O’Reilly’s 2022 AI Adoption survey [3] underscored challenges in building technical infrastructure and the skills gap as the two top bottlenecks to AI adoption in the enterprise. Db2 solves the first by supplying end-to-end ML infrastructure in the database. It also lessens the latter, the skills gap, by providing a simple SQL API for creating and using ML models in the database. In the enterprise, SQL is a more common skill than ML.

Check out the following resources to learn more about the ML features in IBM Db2 and to see additional examples of ML use cases implemented with these features.

Explore the Db2 ML product documentation

Explore the Db2 ML samples on GitHub


References

  1. Paleyes, A., Urma, R.G. and Lawrence, N.D., 2022. Challenges in deploying machine learning: a survey of case studies. ACM Computing Surveys, 55(6), pp. 1–29.
  2. Amershi, S., Begel, A., Bird, C., DeLine, R., Gall, H., Kamar, E., Nagappan, N., Nushi, B. and Zimmermann, T., 2019. Software engineering for machine learning: A case study. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP) (pp. 291–300). IEEE.
  3. Loukides, Mike, 2022. AI Adoption in the Enterprise 2022. https://www.oreilly.com/radar/ai-adoption-in-the-enterprise-2022/


