All Posts

Agreed @PickleRick  I've just done a test and epoch times work just fine with earliest and latest in a search. The formatting seems to be a red herring here.
I beg to differ. I've used earliest/latest with epoch timestamps many times.
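For anyone who wants to sanity-check this themselves, an illustrative search (the index and epoch values here are placeholders, not from this thread):

index=_internal earliest=1722384000 latest=1722470400 | stats count

If the count comes back non-zero for a window you know has data, the epoch values are being honoured as time-range terms.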
Hi @VijaySrrie  Please check this: https://www.splunk.com/en_us/blog/platform/dashboard-studio-dashboard-customization-made-easy.html
Hi @kareem  Please check the lookups and the lookup definitions for the automatic lookup named "useragentstrings"; it is probably missing, or some of its fields are. Could you please copy and paste the whole SPL query (removing hostnames, sensitive details, etc.)?
I can't say I've seen that form used in the wild.
The CSV file needs to have a header row that labels the fields in the file ("Date", "Field2", "Field3", etc.). Then a query can use the lookup command to find a specific value in the CSV file and return the matching fields:

... | lookup dummy.csv Date ``` Returns all fields by default ```

One limiting factor is that lookups are exact matches, so the Date field would need to include the day of the week to match the CSV file. You can set up a lookup definition (Settings->Lookups->Lookup definitions) that references the dummy.csv file but also allows wildcard searching of the lookup table. Go to the Advanced options and put "WILDCARD(Date)" in the "Match type" box. Then it's a matter of putting wildcards ('*') in the CSV file's Date field in place of the day name.
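For anyone who prefers .conf files over the UI, the equivalent lookup definition would look roughly like this in transforms.conf (the stanza name "dummy_lookup" is made up for illustration; match_type is the setting the UI's "Match type" box writes):

[dummy_lookup]
filename = dummy.csv
match_type = WILDCARD(Date)

A row in dummy.csv would then carry a '*' where the day name varies, and | lookup dummy_lookup Date would match it.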
Good morning,

So I am trying to monitor all files within this directory: /var/log/syslog/<IP>

Directory structure: /var/log/syslog/<IP>/2024/01 | 02 | 03 | 04 | 05 | 06 | 07/secure | cron | messages

Hope this makes sense; there are multiple subdirectories, and the end goal is to monitor secure, cron, and messages. I wrote these stanzas within inputs.conf and the configuration did take on the Universal Forwarder:

[monitor:///var/log/syslog/192.168.1.1/.../secure]
disabled = false
host_segment = 4
index = insght

[monitor:///var/log/syslog/192.168.1.1/.../cron]
disabled = false
host_segment = 4
index = insght

[monitor:///var/log/syslog/192.168.1.1/.../messages]
disabled = false
host_segment = 4
index = insght

I have also tried this to capture all subdirs/files:

[monitor:///var/log/syslog/192.168.1.1]
disabled = false
host_segment = 4
recursive = true
index = insght

Also, within _internal I get this message:

INFO TaillingProcess [#### MainTailingThread] - Parsing configuration stanza: monitor:///var/log/syslog/<IP>

which seems to hang there with no other messages logged for the particular stanza(s).

The IP address used is notional. Thanks for the help!
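As a side note for readers with the same layout: instead of one stanza per filename, a single recursive stanza with a whitelist is sometimes used. A minimal sketch, assuming monitor inputs recurse by default and that whitelist is matched against the full file path (adapt the regex to your tree):

[monitor:///var/log/syslog/192.168.1.1]
disabled = false
host_segment = 4
index = insght
whitelist = (secure|cron|messages)$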
Does anyone have a template of capabilities you think are necessary for roles specific to CISOs, ISSMs/ISSOs, and Analysts? I know we can probably just use the User and Power User roles as a baseline, but I was wondering if anyone had any other input or had identified specific items they think those roles need.
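For illustration only, a custom role layered on those baselines might look like this authorize.conf sketch (the role name, index list, and capability picks are assumptions, not a vetted template for CISO/ISSM/ISSO duties):

[role_soc_analyst]
importRoles = user;power
schedule_search = enabled
srchIndexesAllowed = main;security
srchDiskQuota = 500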
Hmm, the documentation says map can use a subsearch:

"3. Use the map command with a subsearch
For complex ad hoc searches, use a subsearch for your map search"

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Map#Basic_examples
I see another syntax error. The map command expects its search string to be in quotation marks rather than as a subsearch. The $earliest$ form doesn't work in subsearches (except in a dashboard).

index=event sourcetype=eventdat
| where like(details,"...")
| eval earliest=strftime(floor(_time), "%m/%d/%Y:%H:%M:%S"), latest=strftime(ceil(_time+2), "%m/%d/%Y:%H:%M:%S")
| table _time details earliest latest
| map maxsearches=10 search="index=sys_stats sourcetype=statdat device=\"...\" earliest=$earliest$ latest=$latest$ | stats count as counter | eval details=$details$, earliest=\"$earliest$\", latest=\"$latest$\" | table _time details counter earliest latest"
I've been requested to identify unused knowledge objects. I'm honestly not sure of the best way to go about this request. I have checked the next scheduled time, but I'm not sure if that's all I need to do before contacting object owners. Any ideas or documentation to help me accomplish this task would be most appreciated. Thank you!
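For anyone starting on the same task, one possible first step (a sketch only; it covers saved searches, not every knowledge object type) is to pull scheduling metadata over REST, then cross-reference against actual usage in the _audit index before contacting owners:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| table title eai:acl.owner eai:acl.app is_scheduled next_scheduled_time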
In Splunk ES and the platform, this error keeps appearing and I couldn't resolve it:

Could not load lookup=LOOKUP-useragentstrings
Hi @richgalloway  Thanks for the tip, I've updated my query:

index=event sourcetype=eventdat
| where like(details,"...")
| eval earliest=strftime(floor(_time), "%m/%d/%Y:%H:%M:%S"), latest=strftime(ceil(_time+2), "%m/%d/%Y:%H:%M:%S")
| table _time details earliest latest
| map [ search index=sys_stats sourcetype=statdat device="..." earliest=$earliest$ latest=$latest$ | stats count as counter | eval details=$details$, earliest="$earliest$", latest="$latest$" | table _time details counter earliest latest ] maxsearches=10

It's still throwing the error:

Invalid value "$earliest$" for time term 'earliest'
Yes, this is possible. I can add the filter, but it will be 1-to-1, i.e. one input per user to monitor. If you have hundreds of users to monitor, that will become cumbersome to manage. Did you have an idea of how multiple users would be configured in the UI?
Hi @yuanliu , Thanks a lot for your help on this matter. I apologize for the confusion earlier; it turns out that the building_from_index_search field did indeed contain multiple values per row, which was causing the matching issues. Adding the | mvexpand building_from_index_search line helped by creating separate rows, which resolved the problem and allowed me to obtain the expected output. Now I'd like to address a similar requirement: while the current approach effectively identifies the unique values in building_from_index_search that are not present in the roomlookup_buildings.csv buildings, I also need to find the unique values in the roomlookup_buildings.csv buildings that do not appear in the building_from_index_search field. Could you help me with a query to achieve this? Thank you in advance for your assistance!
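A sketch of that reverse direction, assuming the lookup field is named buildings and that your original index search can be dropped into the subsearch (both assumptions, since the full query isn't shown here):

| inputlookup roomlookup_buildings.csv
| fields buildings
| rename buildings AS building_from_index_search
| search NOT [ search index=... | mvexpand building_from_index_search | dedup building_from_index_search | fields building_from_index_search ]

The subsearch returns the distinct values seen in the index, and the NOT keeps only the lookup rows that never appear there.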
Hi @tomjb94 , is there some word (e.g. the word "preprod") or string that you can add to your main search (not replacing the field search but adding to it)? This approach will give you more speed in your searches:

index=idem attrs.GW2_ENV_CLASS=preprod http_status=5* http_status!=503 NOT "mon-tx-" preprod

Then, can you reduce the time window? If you have too many events, you could accelerate your searches by scheduling a search that saves its results in a summary index, and then using that index for your searches. Ciao. Giuseppe
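To make the summary-index suggestion concrete, a minimal sketch (the summary index name, field list, and schedule are placeholders): schedule something like

index=idem attrs.GW2_ENV_CLASS=preprod http_status=5* http_status!=503 NOT "mon-tx-"
| fields _time http_status uri_path response_time_s
| collect index=summary_idem

to run every 15 minutes or so, and point the ad hoc searches at index=summary_idem instead of the raw data.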
Hi -

I am currently looking to optimise the search below, as it is using a lot of search head resource:

index=idem attrs.GW2_ENV_CLASS=preprod http_status=5* http_status!=503 NOT "mon-tx-"

Sample JSON result set:

@timestamp: 2024-07-31T12:41:20+00:00
attrs.AWS_AMI_ID:
attrs.AWS_AZ: eu-west-1c
attrs.AWS_INSTANCE_ID: i-0591d93b5e5881da9
attrs.AWS_REGION: eu-west-1
attrs.GW2_APP_VERSION:
attrs.GW2_ENV_CLASS: preprod
attrs.GW2_ENV_NUMBER: 0
attrs.GW2_SERVICE: idem
body_bytes: 1620
bytes_sent: 2060
client_cert_expire_in_days: 272
client_cert_expiry_date: Apr 30 10:11:07 2025 GMT
client_cert_issuer_dn: CN=******* PROD SUB CA2,O=Fidelity National Information Services,L=Jacksonville,ST=Florida,C=US
client_cert_verification: SUCCESS
client_dn: CN=idem-semantic-monitoring-preprod,OU=Gateway2Cloudops,O=Fidelity National Information Services,L=London,C=GB
container_id: 17b7167ec5f2d20ec10704550fc8f2c2b9daedc835ce5fe0828ac86651983517
container_name: /idem-kong-1
correlationId:
hostname: 17b7167ec5f2
http_content_type: application/vnd.*******.idempotency-v1.0+json
http_referer:
http_status: 200
http_user_agent: curl/8.5.0
log: {"@timestamp": "2024-07-31T12:41:20+00:00", "correlationId": "", "request_method": "POST", "hostname": "17b7167ec5f2", "http_status": 200, "bytes_sent": 2060, "body_bytes": 1620, "request_length": 1689, "request": "POST /idempotency/entries/update HTTP/2.0", "http_user_agent": "curl/8.5.0", "http_referer": "", "body_bytes": 1620, "remote_addr": "10.140.49.156", "remote_user": "", "response_time_s": 0.007, "client_dn": "CN=idem-semantic-monitoring-preprod,OU=Gateway2Cloudops,O=Fidelity National Information Services,L=London,C=GB", "client_cert_issuer_dn": "CN=******* RSA PROD SUB CA2,O=Fidelity National Information Services,L=Jacksonville,ST=Florida,C=US", "client_cert_expiry_date": "Apr 30 10:11:07 2025 GMT", "client_cert_expire_in_days": "272", "client_cert_verification": "SUCCESS", "wpg_correlation_id": "mon-tx-ecs-1722429678-idem-pp-2.preprod.euw1.gw2.*******.io", "http_content_type": "application/vnd.******.idempotency-v1.0+json", "uri_path": "/idempotency/entries/update"}
parser: json
remote_addr: 10.140.49.156
remote_user:
request: POST /idempotency/entries/update HTTP/2.0
request_length: 1689
request_method: POST
response_time_s: 0.007
source: stdout
uri_path: /idempotency/entries/update
wpg_correlation_id: mon-tx-ecs-1722429678-idem-pp-2.preprod.euw1.gw2.*******.io

I have tried adding additional filtering on particular fields, but it is not having the desired effect. Please note, the wildcards in the JSON are where I have masked values for the purposes of this community case. Thanks,
Hi @nabeel652 , good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma points are appreciated by all the contributors.
It's not that $earliest$ is not being passed, it's that the value being passed is invalid. The value for the earliest option must be a time modifier ("-1d", for example) or a timestamp in the format %m/%d/%Y:%H:%M:%S. It cannot be an epoch timestamp, but you can use strftime to convert an epoch into the expected format:

| eval earliest = strftime(earliest, "%m/%d/%Y:%H:%M:%S")
If you are already receiving syslog on your rsyslog server, it's better to send it to Splunk using an HEC input on Splunk's side and an omhttp action on rsyslog's side.
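A minimal sketch of the rsyslog side, assuming an HEC token already exists; the host, port, token, and template here are placeholders, so check the omhttp module docs for your rsyslog version rather than taking this as authoritative:

module(load="omhttp")
action(
    type="omhttp"
    server="splunk.example.com"
    serverport="8088"
    usehttps="on"
    restpath="services/collector/raw"
    httpheaderkey="Authorization"
    httpheadervalue="Splunk 00000000-0000-0000-0000-000000000000"
    template="RSYSLOG_SyslogProtocol23Format"
)

On the Splunk side, create and enable an HTTP Event Collector token under Settings -> Data inputs -> HTTP Event Collector.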