All Posts


Does Splunk Enterprise support URI Templates to extend the REST API? I'd like to be able to expose an API such as /resource/{name}/dosomething
sparkline requires the _time field to work. In your case you have two stats commands, so the _time field is lost after your first stats. You can do this instead:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=*
| stats sparkline(sum(b)) as trend sum(b) as Usage by idx
| eval trend=mvmap(trend, if(isnum(trend), round(trend/1024/1024/1024,2), trend))
| eval Usage=round(Usage/1024/1024/1024,2)
| rename idx AS index
| sort Usage

The stats here builds the sparkline from the sum of bytes. The mvmap() function is a trick to convert the sparkline values into GiB figures, just as you are already doing with Usage: a sparkline is simply a special form of multivalue field whose first element is the literal value ##__SPARKLINE__##, so this iterates through the values, skips that first element, and rounds each of the remaining values so the sparkline also shows in GiB.
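To see the shape mvmap() is working on, here is a minimal self-contained sketch using makeresults and made-up byte values rather than your real license data; because the emulated values are string literals, the marker is matched explicitly and the numbers converted with tonumber() instead of relying on isnum():

| makeresults
``` build a fake sparkline-style multivalue field: marker first, then raw byte counts ```
| eval trend=mvappend("##__SPARKLINE__##", "2147483648", "3221225472", "1073741824")
``` leave the marker alone and rescale only the numeric elements to GiB ```
| eval trend=mvmap(trend, if(trend=="##__SPARKLINE__##", trend, round(tonumber(trend)/1024/1024/1024, 2)))

Running this gives trend = ##__SPARKLINE__##, 2, 3, 1, which is the same kind of marker-plus-numbers structure that a GiB-scaled sparkline carries.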
If your forwarder is not forwarding the complete file, there might be a problem with the line breaker. This has nothing to do with how to search - Getting Data In is a better forum for this question.
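For what it's worth, line breaking is controlled in props.conf for the sourcetype on the instance that first parses the data (a heavy forwarder or the indexers); a rough sketch only, with a placeholder sourcetype name and settings that would need to be adapted to the actual data:

# props.conf - [my:custom:sourcetype] is a placeholder, adjust to your data
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000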
Assuming you have a lookup file asn_user.csv and you have set up a lookup definition called asn_user.csv (my preference is not to use .csv in the lookup name, but many others do not make the distinction), you can do

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| lookup asn_user.csv User output ASN as ASNfound
| where 'ASN Count' > 1 AND 'Agent Count' > 1 AND NOT mvfind(ASNfound, ASN)

(Note the single quotes around the field names in the where clause - double quotes there would be treated as string literals.)
I refactored your original search slightly using rename rather than eval, as there's no need to duplicate the fields. I also removed the dc() from the stats as it's not necessary - it can be done later, because you are collecting values, and that is probably more optimal.

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| rename "securityContext.asNumber" as ASN, "authenticationContext.externalSessionId" as "Session ID", "actor.alternateId" as User, "debugContext.debugData.risk" as Risk
| stats values(user_agent) AS "User Agent" values(ASN) as ASN values(Risk) as Risk by User "Session ID"
| lookup asn_user.csv ASN User OUTPUT ASN as found_ASNs
``` Count the ASNs and Agents here, count the number of ASNs found in the lookup, and set a new field if an ASN is seen that is not in the lookup ```
| eval "ASN Count"=mvcount(ASN), "Agent Count"=mvcount('User Agent'), hasNewASN=if('ASN Count'>mvcount(found_ASNs), 1, 0)
``` Do the where before the table ```
| where 'ASN Count' > 1 AND 'Agent Count' > 1 AND hasNewASN=1
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk

The lookup will look up ALL the ASNs collected and return all those that are found, so to see whether there is a new one you can just compare the counts. Note that if there are more than 100 ASNs, a CSV lookup will only return 100 - you can make a lookup definition that allows up to 1000. If that is going to be an issue, you will need to put the lookup BEFORE the stats, test for the found ASN, and set a flag accordingly, e.g.

...
| lookup asn_user.csv ASN User OUTPUT ASN as found_ASN
| eval newASN=if(isnull(found_ASN),1,0)
| stats sum(newASN) as newASNs values(user_agent) AS "User Agent" values(ASN) as ASN values(Risk) as Risk by User "Session ID"
...

so you can then count the new ASNs for that user, but if you can do it after the stats, it will be more performant.

Note the field quoting rules:
- Field names need to be DOUBLE quoted when they contain spaces or other odd characters and are on the left hand side of an eval
- Field names need to be SINGLE quoted when they contain spaces or other odd characters and are on the right hand side of an eval
- Field names need to be DOUBLE quoted when they contain spaces or other odd characters in a stats aggregation
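As an aside, the lookup definition with the raised match limit mentioned above would be set in transforms.conf along these lines (the stanza name asn_user is an assumption; only filename and max_matches matter here):

# transforms.conf - stanza name is hypothetical
[asn_user]
filename = asn_user.csv
max_matches = 1000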
Created the inputs in the app's local folder to use UDP on the heavy forwarder and set the sourcetype to f5:bigip:syslog, but the app is not splitting the data into multiple sourcetypes as it says it is supposed to. Can someone let me know what can be done to get this to work?
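For context, a minimal version of the input described above would look something like this in the app's local/inputs.conf on the heavy forwarder (the port is a placeholder). Note that this only assigns the initial f5:bigip:syslog sourcetype - the add-on's props and transforms also need to be installed on the heavy forwarder for the sourcetype splitting to happen there:

# local/inputs.conf on the heavy forwarder - port 514 is a placeholder
[udp://514]
sourcetype = f5:bigip:syslog
connection_host = ip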
Hi @yuanliu, that works, thanks!
I wrote this query to help look for multiple Autonomous System Number (ASN) values and multiple user agent values in a user's Okta session, as this is an indication of a session hijack. I have created this search, which works as needed:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| search "ASN Count" > 1 AND "Agent Count" > 1

Session ID       ASN          ASN Count   User Agent               Agent Count   User               Risk
idxxxxxxxxxxxx   12345, 321   2           UserAgent1, UserAgent2   2             user@company.com   {reasons=Anomalous Device, level=MEDIUM}
idxxxxxxxxxxxx   6789, 321    2           UserAgent1, UserAgent2   2             user@company.com   {reasons=Anomalous Device, level=MEDIUM}

The issue is that I am not getting only anomalous activity as expected, but many false positives, as most session IDs legitimately have more than one ASN attached to the session. My thought was to create a lookup (asn_user.csv) that will eventually be updated through a scheduled search (run at a slower rate than the main search) to append new data, gathering the Users and ASNs that have had a successful transaction, using this search:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "User"='actor.alternateId'
| table ASN User
| dedup ASN User

ASN     User
12345   user@company.com
321     user@company.com

My issue right now is trying to use the lookup against the main search. The goal is: IF the ASN is new to the user in the main Okta search (meaning the ASN is not seen in the lookup file asn_user.csv), then return the

| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| search "ASN Count" > 1 AND "Agent Count" > 1

results with the anomalous ASN, while still meeting the "ASN Count" > 1 AND "Agent Count" > 1 requirement. Does anyone have some ideas to accomplish this?
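For reference, one possible sketch of the scheduled maintenance search described above (assuming the asn_user.csv lookup already exists, even if empty) would read the current lookup back in, merge the newly seen pairs, and write it out again:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval ASN='securityContext.asNumber', User='actor.alternateId'
| table ASN User
| dedup ASN User
``` merge with the pairs already in the lookup, drop duplicates, and write it back ```
| inputlookup append=true asn_user.csv
| dedup ASN User
| outputlookup asn_user.csv

Re-reading and rewriting the whole file keeps it deduplicated, at the cost of rewriting the CSV on every scheduled run.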
Yes, I have already read this but cannot seem to get it to work. Can you please show me what exactly needs to be added to my search? Below is what I have, but it does not show the trended sparkline when I run it for the past 7 days:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=*
| stats sum(b) as Usage by idx
| eval Usage=round(Usage/1024/1024/1024,2)
| rename idx AS index
| stats sparkline count, avg(usage) by index
| sort Usage
First, let me say thank you!  It certainly works and is exactly what I was looking for.  But man, isn't there any other easier way? lol
You could do something like this

| chart sum(FRCount) as totalHourCount by QSDateOfTxn, QSHourOfTxn limit=1000
| eval row=mvrange(0,2)
| mvexpand row
| eval _metric=QSDateOfTxn." ".mvindex(split("totalHourCount,PPM",","),row)
| fields - QSDateOfTxn
| rename row as _row
| foreach * [| eval <<FIELD>>=if(_row=0,'<<FIELD>>',round('<<FIELD>>'/60,2))]
| eval QSDateOfTxn=mvindex(split(_metric," "),0)
| eval metric=mvindex(split(_metric," "),1)
| table QSDateOfTxn metric *
When we run Get-BrokerSession we get a txt file with timestamps in it for the events. We cannot get both the date and the time to show up in Splunk - we can get the date only or the time only. When we try to do both, the parsed data stops at a line where a timestamp is located and produces no output.
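Timestamp recognition for a monitored file is usually pinned down in props.conf for that sourcetype; a hedged sketch only, with a placeholder sourcetype name and a TIME_FORMAT that must be adjusted to match the actual timestamp string in the txt file:

# props.conf - sourcetype name and TIME_FORMAT are placeholders
[broker:session]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30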
Hi Gregski, I was able to get Splunk to install via your steps, but it won't start, as you mentioned. What steps do I need to take to get the splunkd service to start? Currently I get a timeout error when attempting to start it via the CLI.
Hello! Wondering if someone can help me fine-tune my query. I have it very close but not quite what I want. Here is my query, along with a screenshot of what the results currently look like:

[SomeSearch]
| stats sum(FRCount) as totalHourCount by QSDateOfTxn, QSHourOfTxn limit=1000
| eval PPM = round(totalHourCount/60,2)
| transpose 0 header_field=QSHourOfTxn

I would like the date to be a column (it is possible there would be multiple dates) but still show both totalHourCount and PPM as-is for every hour. Thanks, David
How does Splunk AI assistant keep customer data confidential?
An easier, and perhaps more semantic, method is to use the JSON functions introduced in 8.1 to restructure the data. (As we have seen before, you have developers who overload JSON key names to convey data, which is never a good thing. If you have any influence over them, maybe ask them to change the structure before it reaches the data consumer.) With the data you illustrated, Splunk would have given you fields like key1.field1 and key2.field2. Iterate over them using foreach:

| foreach *.*
    [eval temp = json_object(),
    temp = json_set(temp, "Name A", "<<MATCHSEG1>>", "Name B", "<<MATCHSEG2>>", "Value", '<<FIELD>>'),
    reformat = mvappend(reformat, temp)]
| mvexpand reformat
| spath input=reformat
| fields - _* key* temp

Your example results in

Name A   Name B   Value   reformat
key1     field1   x       {"Name A":"key1","Name B":"field1","Value":"x"}
key2     field2   xx      {"Name A":"key2","Name B":"field2","Value":"xx"}
key3     field3   xxx     {"Name A":"key3","Name B":"field3","Value":"xxx"}

Here is an emulation you can play with and compare with real data:

| makeresults
| eval _raw="{
  \"key1\": { \"field1\": \"x\" },
  \"key2\": { \"field2\": \"xx\" },
  \"key3\": { \"field3\": \"xxx\" }
}"
| spath
``` data emulation above ```
Try something like this

(index=aws-cpe-scl source=*winserver* "methodPath=POST:/scl/v1/equipment/router/*/claim/pods" responseJson "techMobile=true") OR (index=aws-cpe-osc ConnectionAgent "Node * connected:" model=PP203X)
| rex "responseJson=(?<json>.*)"
| eval routerMac = routerMac
| eval techMobile = techMobile
| eval status = status
| spath input=json path=claimed{}.boxSerialNumber output=podSerialNumber
| spath input=json path=claimed{}.locationId output=locationId
| eval node_id = substr(podSerialNumber, 0, 10)
| eval winClaimTime=if(index="aws-cpe-scl",strftime(_time,"%m/%d/%Y %H:%M:%S"),null())
| rex field=_raw "Node\s(?<node_id>\w+)\sconnected"
| eval nodeFirstConnectedTime=if(index="aws-cpe-osc",strftime(_time,"%m/%d/%Y %H:%M:%S"),null())
| stats first(winClaimTime) as claimTime first(routerMac) as routerMac first(nodeFirstConnectedTime) as connectionTime by node_id
punct can be super helpful for trying to diagnose issues by looking at the kinds of events in your logs. I use this pretty regularly on Windows machines when there is an issue going on.

index=wineventlog
| stats latest(_raw) as _raw count by source, punct
| sort - count
First log is:

[2024-07-17T17:45:41.644Z] [INFO] [txnId=cz81j0kykru05f66jgsa9-1721238338661] [methodPath=POST:/scl/v1/equipment/router/88de7cd25f27/claim/pods] [routerMac=88de7cd25f27] [accountNumber=8337130011080779] [userName=p2197160] [status=200] [techMobile=true] [x-trace-id=1720705801769194081009825476769596142f411b7] [responseJson={"claimed":[{"bluetoothMac":"6c:ff:ce:cb:65:29","boxSerialNumber":"SA8A5001E0CHTRP","certificates":{"clientAfter":"1970-01-01T00:00:00.000Z","clientBefore":"2028-01-22T23:23:39.000Z","rootAfter":"2018-01-22T23:23:39.000Z","rootBefore":"2031-11-29T17:15:43.000Z"},"claimKeyRequired":false,"customerId":"6697fb785f7345000a00acea","deployment":"tau-prod","ethernet1Mac":"6c:ff:ce:cb:65:25","ethernetMac":"6c:ff:ce:cb:65:24","firmwareVersion":"2.4.3-72-g65b961c-prod","groupIds":["5d273b4a7f65315f752896b5"],"kvConfigs":[],"id":"SA8A5001E0CHTRP","locationId":"6697fb785f7345000a00aceb","model":"MAPV1S","packId":"f0d9323a-45db-48b8-aef3-acc8345b2913","partNumber":"253865327","partnerId":"5d273b4a7f65315f752896b5","purchaseOrderNumber":"PONA","radioMac24":"6c:ff:ce:cb:65:26","radioMac50U":"6c:ff:ce:cb:65:28","radioMac50L":"6c:ff:ce:cb:65:27","residentialGateway":false,"serialNumber":"SA8A5001E0CHTRP","shardNumber":"fakeShard","shipDate":"04/05/2022","subscriptionRequired":false,"subscriptionRequiredTerm":null,"unclaimable":false,"vendor":{"factory":"I","manufacturer":"SGCPTSN","mfgDate":"2022/13","name":"Plume","partNumber":"PP203X"},"createdAt":"2022-04-21T19:37:14.089Z","updatedAt":"2024-07-17T17:45:41.214Z","_id":"SA8A5001E0","_version":"1.1.0","backhaulDhcpPoolIdx":2}]}]

Second log is:

[2024-07-17 17:46:38.345UTC] [INFO ] [ConnectionAgent] [sessionId=51f9a46c-6e41-40f2-8406-e3bfbe97f420] - Node SA8A5001E0 connected: model=PP203X, firmware=4.2.0-91-g5cf406-prod, ovs=???, opensync=3.2.6.0, ip=100.101.206.57, vendorName=Plume

So what I am doing is fetching the list of boxSerialNumber values from log 1 and then checking whether log 2 was generated for the same boxSerialNumber. I am using the dedup sortby command as I need the first event timestamp.

claimTime             node_id      routerMac
07/17/2024 11:44:16   SA8A5001E0   88de7cd25f27

connectionTime        node_id
07/17/2024 11:44:54   SA8A5001E0

The final result should be a merge of these 2 tables based on node_id:

claimTime             node_id      routerMac      connectionTime
07/17/2024 11:44:16   SA8A5001E0   88de7cd25f27   07/17/2024 11:44:54
Thank you. Though it might help someone new like me - you never know.