All Topics

Hi Splunkers,

A long time ago we set up a SH cluster and added search peers via the CLI. Some time later we changed the setup and began defining the search peers via an app pushed from the deployer. All $SPLUNK/etc/system/local distsearch.conf files were purged; the app contains some peers in distsearch.conf and the clustered indexers in server.conf.

Recently we found one cluster member stubbornly kept re-creating a distsearch.conf in system/local that overrode the cluster's configs (pushed via the app). After removing the file and doing a rolling restart, the file showed up again. Cleaning the KV store and re-adding that member to the cluster restored the rogue distsearch.conf again. We also deleted the *bundle files in $SPLUNK/var/run.

Following the steps described as "Add a member that was both removed and disabled" here: https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Addaclustermember we found that when calling "splunk add shcluster-member" on the captain, the "new" member's system/local path was purged of some files and re-created... with the rogue distsearch.conf we were trying to get rid of. And this event showed up in _internal:

ERROR ApplicationUpdater - Error reloading : handler for distsearch (access_endpoints /search/distributed/bundle-replication-files, /search/distributed/peers)

Any ideas? The final solution was to nuke the VM and re-create it from scratch.
I have the below data generated by a timechart. I'm trying to write a query that alerts me if there is a continuous sequence of the number 1 over a span of 15 minutes.

index=abc "Heartbeat*"
| timechart span=2m count

2021-10-04 10:20:00 1
2021-10-04 10:22:00 1
2021-10-04 10:24:00 1
2021-10-04 10:26:00 1
2021-10-04 10:28:00 1

when the
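One possible approach (a sketch, not tested against this data; it assumes the 2-minute span from the post, so seven consecutive non-zero buckets cover roughly 15 minutes) is to use streamstats to count how many of the last seven buckets had a count of 1:

```spl
index=abc "Heartbeat*"
| timechart span=2m count
| streamstats window=7 sum(eval(if(count==1,1,0))) as consecutive_ones
| where consecutive_ones == 7
```

The alert could then be configured to trigger when the number of results is greater than zero.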
Hi all, can someone help me understand why a similar query gives me 2 different results for the Intrusion Detection datamodel? The only difference between the 2 is the order.

Query 1:

| tstats summariesonly=true values(IDS_Attacks.dest) as dest values(IDS_Attacks.dest_port) as port from datamodel=Intrusion_Detection where IDS_Attacks.ICS_Asset=Yes IDS_Attacks.url=* by IDS_Attacks.src,IDS_Attacks.transport,IDS_Attacks.url
| eval source="wildfire"
| rename IDS_Attacks.url as wildfireurl
| rex field=wildfireurl "(?P<domain>[^\/]*)"
| lookup rf_url_risklist.csv Name as wildfireurl OUTPUT EvidenceDetails Risk
| lookup idefense_http_intel ip as wildfireurl OUTPUT description weight
| lookup rf_domain_risklist.csv Name as domain OUTPUT RiskString
| search EvidenceDetails=* OR description=* OR RiskString=*
| rename "IDS_Attacks.*" as *
| append
    [| tstats summariesonly=true values(All_Traffic.dest) as dest values(All_Traffic.app) as app values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes All_Traffic.action=allowed app!="insufficient-data" app!="incomplete"
        [| inputlookup ids_malicious_ip_tracker
        | fields src
        | rename src AS All_Traffic.src ] by All_Traffic.src,All_Traffic.transport,All_Traffic.dest_port
    | lookup n3-a_cidr_wlist.csv cidr_range as All_Traffic.src OUTPUT cidr_range as src_match
    | where src_match="NONE" OR isnull(src_match)
    | fields - src_match
    | lookup rf_ip_risklist.csv Name as All_Traffic.src OUTPUT EvidenceDetails Risk
    | lookup idefense_ip_intel ip as All_Traffic.src OUTPUT description weight
    | lookup ips_tor.csv ip as All_Traffic.src output app as app
    | search EvidenceDetails=* OR description=* OR app="tor"
    | where Risk > 69 OR weight > 40 OR app="tor"
    | eval rf_evidence_details_0 = mvappend(EvidenceDetails, description)
    | fields - description, EvidenceDetails
    | eval Attack_Count = mvcount(rf_evidence_details_0)
    | search NOT All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12
    | rename "All_Traffic.*" as *]
| append
    [| tstats summariesonly=true count values(All_Traffic.action) values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes by All_Traffic.src, All_Traffic.transport, All_Traffic.dest_port, All_Traffic.dest
    | lookup rf_ip_risklist.csv Name as All_Traffic.dest OUTPUT EvidenceDetails Risk
    | lookup idefense_ip_intel ip as All_Traffic.dest OUTPUT description weight
    | search EvidenceDetails=* OR description=*
    | where Risk > 69 OR weight > 40
    | eval rf_evidence_details_0 = mvappend(EvidenceDetails, description)
    | fields - description, EvidenceDetails
    | eval Attack_Count = mvcount(rf_evidence_details_0)
    | search count > 100
    | search All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12
    | rename "All_Traffic.*" as *]

Second query:

| tstats summariesonly=true count values(All_Traffic.action) values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes by All_Traffic.src, All_Traffic.transport, All_Traffic.dest_port, All_Traffic.dest
| lookup rf_ip_risklist.csv Name as All_Traffic.dest OUTPUT EvidenceDetails Risk
| lookup idefense_ip_intel ip as All_Traffic.dest OUTPUT description weight
| search EvidenceDetails=* OR description=*
| where Risk > 69 OR weight > 40
| eval rf_evidence_details_0 = mvappend(EvidenceDetails, description)
| fields - description, EvidenceDetails
| eval Attack_Count = mvcount(rf_evidence_details_0)
| search count > 100
| search All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12
| rename "All_Traffic.*" as *
| append
    [| tstats summariesonly=true values(All_Traffic.dest) as dest values(All_Traffic.app) as app values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes All_Traffic.action=allowed app!="insufficient-data" app!="incomplete"
        [| inputlookup ids_malicious_ip_tracker
        | fields src
        | rename src AS All_Traffic.src ] by All_Traffic.src,All_Traffic.transport,All_Traffic.dest_port
    | lookup n3-a_cidr_wlist.csv cidr_range as All_Traffic.src OUTPUT cidr_range as src_match
    | where src_match="NONE" OR isnull(src_match)
    | fields - src_match
    | lookup rf_ip_risklist.csv Name as All_Traffic.src OUTPUT EvidenceDetails Risk
    | lookup idefense_ip_intel ip as All_Traffic.src OUTPUT description weight
    | lookup ips_tor.csv ip as All_Traffic.src output app as app
    | search EvidenceDetails=* OR description=* OR app="tor"
    | where Risk > 69 OR weight > 40 OR app="tor"
    | eval rf_evidence_details_0 = mvappend(EvidenceDetails, description)
    | fields - description, EvidenceDetails
    | eval Attack_Count = mvcount(rf_evidence_details_0)
    | search NOT All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12
    | rename "All_Traffic.*" as *]
| append
    [| tstats summariesonly=true values(IDS_Attacks.dest) as dest values(IDS_Attacks.dest_port) as port from datamodel=Intrusion_Detection where IDS_Attacks.ICS_Asset=Yes IDS_Attacks.url=* by IDS_Attacks.src,IDS_Attacks.transport,IDS_Attacks.url
    | eval source="wildfire"
    | rename IDS_Attacks.url as wildfireurl
    | rex field=wildfireurl "(?P<domain>[^\/]*)"
    | lookup rf_url_risklist.csv Name as wildfireurl OUTPUT EvidenceDetails Risk
    | lookup idefense_http_intel ip as wildfireurl OUTPUT description weight
    | lookup rf_domain_risklist.csv Name as domain OUTPUT RiskString
    | search EvidenceDetails=* OR description=* OR RiskString=*
    | rename "IDS_Attacks.*" as *]

The above query has 3 parts: suspicious inbound, suspicious outbound, and suspicious URL. While executing the first query I am getting 1 result, and while executing the second query I am getting 6 results, for the same time frame.
Hi, I am trying to create an alert for hosts that are communicating to the internet, and I want to know the destinations. But I have a lot of trusted destinations to exclude (approx. 2,800). The below query gives me all the trusted destinations plus others:

index=net sourcetype=proxy dest!=10.1.* 10.2.* sc_status=200 c_ip IN (10.1.10.* 10.1.11.* )
| lookup dnslookup clientip as c_ip OUTPUT clienthost as DNSName
| stats count by c_ip DNSName dest sc_status

Please let me know of any solution.
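With roughly 2,800 trusted destinations, a maintainable approach (a sketch; the lookup file `trusted_dests.csv` and its `dest` column are hypothetical names) is to keep the trusted list in a lookup and exclude it with a subsearch:

```spl
index=net sourcetype=proxy sc_status=200 c_ip IN (10.1.10.*, 10.1.11.*)
    NOT [| inputlookup trusted_dests.csv | fields dest ]
| lookup dnslookup clientip as c_ip OUTPUT clienthost as DNSName
| stats count by c_ip DNSName dest sc_status
```

Note that subsearches are subject to result-count limits, so it is worth verifying the whole lookup is being expanded at this size.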
Hello everyone, I want to forward all data from an index/sourcetype to a third-party system. I did this in outputs.conf:

[tcpout:fastlane]
server = ***:1468
sendCookedData = false

[syslog]
defaultGroup = syslogGroup

[syslog:syslogGroup]
server = ***:514

but it sends just metrics from the internal index. How can I fix it? Thank you.
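To route a specific index/sourcetype rather than everything, one option (a sketch; the sourcetype name is a placeholder, and this must run on a parsing instance such as a heavy forwarder or indexer) is selective routing via props.conf and transforms.conf, pointing events at the tcpout group defined above:

```ini
# props.conf
[my_sourcetype]
TRANSFORMS-routeFastlane = route_to_fastlane

# transforms.conf
[route_to_fastlane]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = fastlane
```

For the syslog output, the analogous key is _SYSLOG_ROUTING with the syslog group name in FORMAT.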
I am a Splunk user and was trying to fix a Splunk query. In the SPL the user is using a key-value filter to get the resulting events:

index="as400" sourcetype="pac:avenger:tss:syslog" (status="failure" OR action="failure")

and I see many matching events. When I rewrote the query like the following, I was hoping to at least see some or all matching events, but instead I got "No results found":

index="as400" sourcetype="pac:avenger:tss:syslog" "failure"

When I looked into the raw events by just querying

index="as400" sourcetype="pac:avenger:tss:syslog"

I found no keyword in the raw events containing the string. How is it possible that the word "failure" is not part of the raw event, and yet I see results when I search using the key-value pair in the SPL but no results when only searching by "failure"? It baffles me!
Hi, how can I extract the first occurrence of "User ABC123 invalid" with rex? Here is the log:

2021-10-03 13:26:44,441 ERROR [APP] User ABC123 invalid: javax.security.auth.login.LoginException: User ABC123 invalid

Thanks,
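One possible extraction (a sketch; the field name `msg` and the index name are arbitrary; rex captures only the first match by default, which is the behavior asked for here):

```spl
index=app_logs "invalid"
| rex field=_raw "(?<msg>User\s+\S+\s+invalid)"
| table _time msg
```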
Hi, I have a field (Lastsynctime) which outputs time in the below format:

2021-10-02 09:06:18.173

I want to change the time format to "%d/%m/%Y %H:%M:%S". I tried with strftime, which is not working:

| eval SyncTime=strftime(Lastsynctime,"%d/%m/%Y %H:%M:%S")
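strftime expects an epoch timestamp, and Lastsynctime here is a string, so it usually has to be parsed with strptime first. A sketch (the "%3N" for milliseconds assumes the exact format shown in the post):

```spl
| eval SyncTime=strftime(strptime(Lastsynctime, "%Y-%m-%d %H:%M:%S.%3N"), "%d/%m/%Y %H:%M:%S")
```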
I tried to add a simple JS to a dashboard but nothing I tried works. I have Splunk 8.2.1, single instance, on Windows. The script is in C:\Program Files\Splunk\etc\apps\<appname>\appserver\static. I restarted the Splunk service several times and clicked the bump button several times. JS is active in my browser and the cache got cleared several times too. When I go to http://127.0.0.1:8000/de-DE/app/<appname>/<dashboardname>/<scriptname> I just get {}

My dashboard:

<form script="btnclick.js">
  <label>JS Test</label>
  <init>
  </init>
  <fieldset submitButton="false">
    <input type="text" token="field1">
      <label>field1</label>
      <default>$randInt$</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        <button id="rand" class="btn btn-primary">Random</button>
        <p>
          $randInt$
        </p>
      </html>
    </panel>
  </row>
</form>

My JS:

require([
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function(mvc) {
    var tokens = mvc.Components.get("default");
    console.log("Test");
    $('#rand').on("click", function() {
        tokens.set("randInt", 2);
    });
});

Why does it not work?
Hi all, I have a TA deployed using the deployment server. The config files are deployed to different folders in bin and local:

/opt/splunk/etc/apps/TA/bin/folder/*.conf
/opt/splunk/etc/apps/TA/local/inputs.conf
/opt/splunk/etc/apps/TA/bin/*.sh

Where can I add files locally within the TA that remain untouched and will not be overwritten by the deployment server?

Best, N.
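One setting worth checking (a sketch; the server class name is a placeholder) is excludeFromUpdate in serverclass.conf on the deployment server, which excludes the listed paths from being overwritten when the app is redeployed:

```ini
# serverclass.conf (on the deployment server)
[serverClass:myServerClass:app:TA]
excludeFromUpdate = $app_root$/local
```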
I have an alert that should be sent out every day at 8 am and 4 pm, even if there are no results. I can only see the email get sent out sometimes, even if there are fewer results than the trigger. I've sent it out to several different email addresses to make sure there is no email filter that stops it. The Splunk version is 8.0.4.1; we are going to upgrade soon, however I would need the alert set up before then.

This is my configuration:

Title: Error Report
Alert type: Scheduled - Run on Cron Schedule
Time Range: Last 24 Hours
Cron Expression: 00 8,16 * * *
Expires: 60 min
Trigger alert When: Number of Results is less than 999999
Trigger: Once
When Triggered: Email
Include: Inline Table & Attach CSV

When I check in python.log it says that sendemail:139 was done for all of the alerts. Does anyone know what can be stopping it from working?
Hi there. There is one thing that's not obvious to me. I understand that if I create a non-accelerated datamodel, searches on the datamodel are converted on the fly to searches on the underlying data, and therefore the permissions of the user performing the search are applied correctly, right? But how about accelerated ones? There are summaries created by the "system" user. Does Splunk check permissions on datamodel summaries the same way it does on raw indexes?

Let's assume I have your typical CIM datamodel, Network Sessions. I have a macro cm_Network_Sessions_Indexes defined as "index=internal_juniper OR index=external_cisco". So the CIM Network Sessions datamodel is built on events held in two separate indexes (let's say I have two different teams maintaining those two device classes). Now, if this datamodel were not accelerated, I assume that my Juniper admin performing a search on it would get only sessions from the internal_juniper index, and vice versa: the Cisco admin would get only sessions from the Ciscos. But what happens when I turn on the acceleration? Will it still work this way? Or will all admins get to see all sessions because they are retrieved from the accelerated summaries, not the underlying indexes?
Hey, we have some 1,500 servers with Splunk forwarders installed. We need to find the path, i.e. the location of the data or logs coming from these servers. Is there a simple way to do that?
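One quick way (a sketch; it assumes the forwarded data is searchable from where the query runs, and that you have access to the relevant indexes) is to list the sources each host is sending:

```spl
| tstats count where index=* by host, source
| sort host
```

The source field for monitored files is the file path on the originating server.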
Hi, can someone help me with the rex command to extract the string included in the first [] from the below pattern? For example, the strings to extract from the below events are Proxy - Zscaler, Teams, and Exchange, placed into a field "CI_Name".

[Proxy - Zscaler] [xxxxxxxr] USA US-22 Peer 2
[Teams] [xxxxxxx] Mexico Login - MX (proxy)
[Exchange] [xxxxx] Mexico Outlook Send Email - MX (proxy)

Thanks
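A possible rex (a sketch, anchored to the start of the event so only the first bracketed value is captured):

```spl
| rex field=_raw "^\[(?<CI_Name>[^\]]+)\]"
```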
Hi, below is my search:

index=aa sourcetype=bb
| stats sum(CountOf_True) as True sum(CountOf_false) as false
| table True False
| eval comp="Test1"
| append [| search index=cc sourcetype=dd | eval comp="Test2"]
| eventstats count as total_count by comp
| stats count(eval(Status=="True")) as True count(eval(Status=="False")) as False count(eval(Status=="Error")) as "Error" count(eval(Status=="Excluded")) as "Excluded" max(total_count) as total by comp
| eval "True %"=round((('True'+'Excluded')/total*100),2)
| eval "False %"=round((('False'+'Error')/total*100),2)
| sort sort_field
| fields - sort_field
| table Comp "True %" "False %"

The result I get is:

Comp    True %    False %
Test1   0         0
Test2   93.00     7.00

I have to get the actual % for Test1 too, but I am getting 0. I'm not sure if my append is wrong with the stats sum(). Can anyone show me the right way to get the values for the above search?
Hi, we use Splunk DB Connect to pull the DB logs. What will be the impact if we poll the DB every minute from Splunk? Is there a way to measure the impact?
Hi everyone, I created a custom Splunk app, and when using the (unmodified) search dashboard within the app to produce a table, it is text-wrapping most of the fields, i.e. single-row fields are appearing on multiple lines. When I run the exact same search (literally copied and pasted) in the Search and Reporting app, the fields are not word-wrapped and show on single lines as expected.

Is there a setting somewhere that I need to enable/disable in my app settings so that when running a search to produce a table in the search dashboard, it expands the field width like the Search and Reporting app does? Screenshots below: I basically want my search results in the custom app (first pic) to look like the search results in the Search and Reporting app (second pic), hopefully via a setting or something.

[Screenshot: custom app search]
[Screenshot: Search and Reporting search]

Thanks in advance.
How can I delay the trigger of the email alert to, let's say, 5 minutes? For example: the alert detected response_code=500, but I would like the email alert to trigger on the 5th minute only if the response_code is still the same (500). Is it possible? Thanks!
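One way to approximate this (a sketch; the index name is hypothetical, response_code is taken from the post) is to schedule the alert over the last 5 minutes and only fire when the condition held in every minute of the window:

```spl
index=web_logs response_code=500 earliest=-5m@m latest=@m
| timechart span=1m count
| stats min(count) as min_per_minute
| where min_per_minute > 0
```

With the alert condition set to "number of results is greater than 0", the email goes out only if a 500 was seen in each of the five one-minute buckets.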
Hi, I am receiving DB Connect logs into Splunk, but the user wants the logs to be in MKV format. Is there a setting to parse the logs in MKV format? In the DBX box we had an option to output the logs in MKV format; I am not finding a similar setting in the DB Connect app.
Hi Team, I am trying to extract a few report fields from the user agent, like below:

OS details
OS version
Browser
Browser version
Operating system
Operating system version
Mobile device

Currently I am using eval (if & case) to generate the report; however, it is a very manual and time-consuming process. Please find below commands for example. Can anyone help me with how to use the lookup command?

Sample if & case:

If - val Device =if(match(cs_user_agent, "iPhone"),"iPhone",if(match(cs_user_agent, "Macintosh"),"iPhone",if(match(cs_user_agent, "iPad"),"iPhone",if(match(cs_user_agent, "Android"),"Android",if(match(cs_user_agent, "Win64"),"Windows",if(match(cs_user_agent, "14092"),"Windows",if(match(cs_user_agent, "Windows"),"Windows",if(match(cs_user_agent,"SM-"),"Android",if(match(cs_user_agent,"CPH"),"Android",if(match(cs_user_agent,"Nokia"),"Android",if(match(cs_user_agent,"Pixel"),"Android",if(match(cs_user_agent,"TB-"),"Android",if(match(cs_user_agent,"VFD"),"Android",if(match(cs_user_agent,"HP%20Pro%20Slate"),"Android",if(match(cs_user_agent,"VOG-L09"),"Android",if(match(cs_user_agent,"YAL-L21"),"Android",if(match(cs_user_agent,"ATU-L22"),"Android",if(match(cs_user_agent,"MAR-LX1A"),"Android",if(match(cs_user_agent,"RNE-L22"),"Android",if(match(cs_user_agent,"INE-LX2"),"Android",if(match(cs_user_agent,"AMN-LX2"),"Android",if(match(cs_user_agent,"LYO-LO2"),"Android",if(match(cs_user_agent,"DRA-LX9"),"Android",if(match(cs_user_agent,"LYA-L29"),"Android",if(match(cs_user_agent,"ANE-LX2J"),"Android",if(match(cs_user_agent,"STK-L22"),"Android",if(match(cs_user_agent,"EML-AL00"),"Android",if(match(cs_user_agent,"BLA-L29"),"Android",if(match(cs_user_agent,"X11"),"Linux",if(match(cs_user_agent,"LDN-LX2"),"Android",if(match(cs_user_agent,"TB3-"),"Android",if(match(cs_user_agent,"5033T"),"Android",if(match(cs_user_agent,"5028D"),"Android",if(match(cs_user_agent,"5002X"),"Android",if(match(cs_user_agent,"COR-"),"Android",if(match(cs_user_agent,"MI%20MAX"),"Android",if(match(cs_user_agent,
"WAS-LX2"),"Android",if(match(cs_user_agent,"vivo"),"Android",if(match(cs_user_agent,"EML-L29"),"Android",if(match(cs_user_agent,"Moto"),"Android",if(match(cs_user_agent,"MMB"),"Android",if(match(cs_user_agent,"Redmi%20Note%208"),"Android",if(match(cs_user_agent,"M2003J15SC"),"Android",if(match(cs_user_agent,"MI%20MAX"),"Android",if(match(cs_user_agent,"Nexus"),"Android",if(match(cs_user_agent,"ELE-L29"),"Android",if(match(cs_user_agent,"Redmi%20Note%204"),"Android",if(match(cs_user_agent,"rv:89.0"),"Android",if(match(cs_user_agent,"VKY-L09"),"Android",if(match(cs_user_agent,"SmartN11"),"Android",if(match(cs_user_agent,"A330"),"Android",if(match(cs_user_agent,"LM-"),"Android",if(match(cs_user_agent,"G8341"),"Android",if(match(cs_user_agent,"INE-AL00"),"Android",if(match(cs_user_agent,"Mi"),"Android",if(match(cs_user_agent,"CLT"),"Android",if(match(cs_user_agent,"Android"),"Android",if(match(cs_user_agent,"BV9700Pro"),"Android",if(match(cs_user_agent,"5024I"),"Android",if(match(cs_user_agent,"MEIZU"),"Android",if(match(cs_user_agent,"Linux%20X86_64"),"Linux","OTHER"))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))) Case - val Brand= case(match(cs_user_agent, "CPH"),"Oppo",match(cs_user_agent, "SM-"),"Samsung",match(cs_user_agent, "VFD"),"Vodafone",match(cs_user_agent, "VFD"),"Vodafone",match(cs_user_agent, "VOG"),"Huawei",match(cs_user_agent, "ELE"),"Huawei",match(cs_user_agent, "CLT"),"Huawei",match(cs_user_agent, "EML"),"Huawei",match(cs_user_agent, "LYA"),"Huawei",match(cs_user_agent, "EVR"),"Huawei",match(cs_user_agent, "BLA"),"Huawei",match(cs_user_agent, "DRA"),"Huawei",match(cs_user_agent, "LDN"),"Huawei",match(cs_user_agent, "YAL-L21"),"Huawei",match(cs_user_agent, "ATU-L22"),"Huawei",match(cs_user_agent, "MAR-LX1A"),"Huawei",match(cs_user_agent, "X11"),"Linux",match(cs_user_agent, "INE-LX2"),"Huawei",match(cs_user_agent, "AMN-"),"Huawei",match(cs_user_agent, "RNE-L22"),"Honor",match(cs_user_agent, "LYO"),"Huawei",match(cs_user_agent, 
"ANE"),"Huawei",match(cs_user_agent, "STK"),"Huawei",match(cs_user_agent, "BLA"),"Huawei",match(cs_user_agent, "TB3-"),"Lenovo",match(cs_user_agent, "5033T"),"Alcatel",match(cs_user_agent, "5028D"),"Alcatel",match(cs_user_agent, "5002X"),"Alcatel",match(cs_user_agent, "iPhone"),"iPhone",match(cs_user_agent, "20Win64"),"Desktop",1=1,"other")