All Topics

Good day. I have installed Splunk ES v9.2.1 on a Linux server (CentOS 7.9). On the Splunk ES server, I have installed the Splunk Add-on for Unix and Linux with all scripts, including rlog.sh, enabled. I have also configured Splunk Forwarder v9.2.1 on a Linux client (CentOS 7.9). The Splunk ES server is receiving logs from the client as normal, but I still have a problem with the auditd logs coming from the client: the USER_CMD part of the auditd logs still appears in hex format instead of ASCII.

For example, part of the log reads:

  USER_CMD=636174202F6574632F736861646F77

where I expect the decoded ASCII value:

  USER_CMD=cat /etc/shadow

What am I doing wrong? And what can I do to view auditd logs in Splunk without me, as an analyst, decoding each log entry one at a time?

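A minimal SPL sketch of one common workaround: auditd hex-encodes command fields that contain special characters (such as the space in "cat /etc/shadow"), and the value can be decoded at search time by turning each hex byte pair into a %XX escape and passing it through urldecode(). The index and sourcetype names below are assumptions; adjust them to wherever the add-on sends your audit data.

  index=os sourcetype=auditd USER_CMD
  | rex "USER_CMD=(?<cmd_hex>[0-9A-Fa-f]+)"
  | eval cmd_ascii=urldecode(replace(cmd_hex, "([0-9A-Fa-f]{2})", "%\1"))
  | table _time host cmd_hex cmd_ascii

The same eval expression could be saved as a calculated field on the sourcetype so the decoded value appears automatically, rather than being decoded per search.
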
https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-gather-mobile-crash-stack-traces-using-the-API/ta-p/25622

I found this rather old post above, but it doesn't seem to be working. Essentially, what we would like to do is this:
- For the last x amount of time, programmatically gather mobile crash data (crash location, stack trace if available)

I've tried using the commands from the post above, but our users only have a client secret. The cookie doesn't seem to be valid.

  $ curl -X GET -c /tmp/cookie --user <username>:<clientsecret> "https://<controller domain>.saas.appdynamics.com/controller/auth?action=login"
  $ cat /tmp/cookie
  # Netscape HTTP Cookie File
  # https://curl.se/docs/http-cookies.html
  # This file was generated by libcurl! Edit at your own risk.
  <controller domain>.saas.appdynamics.com  FALSE  /controller  TRUE  0  JSESSIONID  node0sjew4rhlgia01p578fkyqraqw146492667.node0
  $ curl -X POST -b /tmp/cookie  https://<controller domain>.saas.appdynamics.com/controller/restui/analyticsCrashDashboardUiService/getMobileCrashGroups
  <!DOCTYPE html>
  <html lang="en">
  <head>
      <meta charset="UTF-8">
      <title>Unauthorized</title>
  </head>
  <body>
  HTTP Error 401 Unauthorized
  <p/>
  This request requires HTTP authentication
  </body>
  </html>

I need to perform an analysis based on a lookup file named checkin_rooms.csv, which includes a column confroom_ipaddress with values such as:

  10.40.89.76
  17.76.42.44
  17.200.126.20

For each IP address in this file, I want to check the Splunk logs in index=fow_checkin for the following conditions:
1. There is a message containing "IpAddress(from request body)".
2. There is no message associated with the same IP address that contains display button:panel-* in other events.

Example log entries:

  message: Display Option Request Source: TouchPanel, IpAddress(from request body): null, Action: buttonDisplay, Timezone: null and IpAddress(from request header): 17.200.126.20
  message: display button:panel-takeover for ipaddress: 17.200.126.20

Could someone please guide me on how to construct this query to identify which IP addresses from the lookup file meet these criteria? Thanks in advance.

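A rough sketch of one way to approach this, under two assumptions of mine: that the message field is already extracted, and that the request-header IP is the value to match against the lookup. The idea is to flag each condition per event, aggregate per IP with stats, then restrict to IPs that appear in the lookup.

  index=fow_checkin ("IpAddress(from request body)" OR "display button:panel-")
  | rex field=message "IpAddress\(from request header\): (?<confroom_ipaddress>\d+\.\d+\.\d+\.\d+)"
  | rex field=message "for ipaddress: (?<panel_ip>\d+\.\d+\.\d+\.\d+)"
  | eval confroom_ipaddress=coalesce(confroom_ipaddress, panel_ip)
  | eval has_request=if(like(message, "%IpAddress(from request body)%"), 1, 0)
  | eval has_panel=if(like(message, "%display button:panel-%"), 1, 0)
  | stats max(has_request) as has_request max(has_panel) as has_panel by confroom_ipaddress
  | where has_request=1 AND has_panel=0
  | search [ | inputlookup checkin_rooms.csv | fields confroom_ipaddress ]
  | table confroom_ipaddress

If the rex patterns don't match how your fields are actually extracted, the same shape (flag each condition with eval, aggregate per IP with stats, then filter against the lookup) should still apply.
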
I have a license server where I have two indexer pools, A and B, configured. Pool A consists of a cluster of 5 indexers with an average consumption of 500 GB. Pool B consists of 1 indexer with a consumption of 100 GB per day. In pool B, data from an F5 index is forwarded to the indexer in pool A. My total license consumption has increased to over 800 GB. My question is: is forwarding data from indexer B to indexer A causing me to consume more license? Would it help if I changed the configuration to a single pool?

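A quick sketch for checking where the extra volume is being metered, assuming the default license usage log on the license manager is searchable from your search head; if the same raw data is being indexed in both pools, it will show up twice in this breakdown.

  index=_internal source=*license_usage.log* type=Usage
  | stats sum(b) as bytes by pool, idx
  | eval GB=round(bytes/1024/1024/1024, 2)
  | sort - GB
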
Hello all, I have a query which creates a table similar to the following:

  | table S42DSN_0001 S42DSN_0010

The table populates data within the S42DSN_0001 column, but not the S42DSN_0010 column. I've double-checked that there is definitely data captured within that field by looking at the events. There are 20 similarly named fields using the format S42DSN_00## which are found within the raw event data. Only the first 8 return results using the above query. For example, the following works fine:

  | table S42DSN_0001 S42DSN_0002

Any thoughts on why this might be happening? I am wondering if events past iteration S42DSN_0008 are not considered interesting, so Splunk is leaving them out of the results. Oddly enough, if I change my time period to the past 30 days and use S42DSN_0010=* as a search criterion, I receive some, but not all, results within that column.

Thanks in advance,
Trevor

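A small diagnostic sketch, assuming the values appear in the raw events as key=value pairs: forcing the extraction with rex sidesteps automatic field discovery, which can help confirm whether the data is present but simply not being extracted for the later field names. The index, sourcetype, and key=value format here are assumptions.

  index=your_index sourcetype=your_sourcetype S42DSN_0010
  | rex "S42DSN_0010=(?<S42DSN_0010_forced>\S+)"
  | table _time S42DSN_0001 S42DSN_0010 S42DSN_0010_forced

If the forced column populates while the original does not, the issue is in field extraction/discovery rather than in the data itself.
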
Hello, I am going to be sitting for the Core Certified User exam in a week, and I just wanted to ask if there are any tips or advice somebody could give me. I have been prepping for a while, as well as taking some Udemy courses geared toward the exam. Anything helps!

Hi Community, I need to calculate the difference between two timestamps printed in the log4j logs of a Java application from 3 different searches. The timestamp is printed in the log after the "system time" keyword.

Logs for search 1:

  2024-07-18 06:11:23.438 INFO [ traceid=8d8f1bad8549e6ac6d1c864cbcb1f706 spanid=cdb1bb734ab9eedc ] com.filler.filler.filler.MessageLoggerVisitor [TLOG4-Thread-1-7] Jul 18,2024 06:11:23 GMT|91032|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.visitor.MessageLoggerVisitor|-|PRD01032 - Processor (Ingress Processor tlog-node4) processed message with system time 1721283083437 batch id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 correlation-id (f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001) and body (

Logs for search 2:

  DFM01081 - Batch having id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 on processor-name Egress Processor, transaction status commited by consumer

Logs for search 3:

  2024-07-18 06:11:23.487 INFO [ traceid= spanid= ] com.filler.filler.filler.message.processor.RestPublisherProcessor [PRD-1] Jul 18,2024 06:11:23 GMT|91051|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.processor.RestPublisherProcessor|-|PRD01051 - Message with correlation-id f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001 successfully published at system time 1721283083487 to MCD

I am using the query below to calculate the time difference. I need to filter out the correlation ids in search 1 not matching the batch ids from search 2, and calculate the system-time difference from the matching correlation ids between search 1 and search 2 which also match with search 3. The query below gives an empty systime_mcd; I need help in getting this through.

  sourcetype=log4j
  | rex "91032\|PRD\|SYSTEM\|test\-01\.Autodeploy\-profiles\-msgdeliver\|10\.12\.163\.65\|\-\|\-\|\-\|\-\|com\.filler\.filler\.filler\.message\.visitor\.MessageLoggerVisitor\|\-\|PRD01032 \- Processor (.*?) processed message with system time (?<systime_batch>.+) batch id (?<batch_id_passed>.+) correlation-id \((?<corrid>.+)\) and body"
  | rex "DFM01081 \- Batch having id (?<batch_id>.+) on processor-name Egress Processor\, transaction status commited by consumer"
  | rex "com\.filler\.filler\.filler\.message\.processor\.RestPublisherProcessor\|\-\|PRD01051 \- Message with correlation\-id \((?<corrid>.+)\) successfully published at system time (?<systime_mcd>.+) to MCD"
  | stats first(systime_batch) as systime_batch values(systime_mcd) as systime_mcd values(corrid) as corrid by batch_id_passed
  | mvexpand corrid
  | eval diff = (systime_mcd-systime_batch)

@ITWhisperer, can you please look into this as well? This is an extension of what you already helped with. Thanks in advance.

So the other day I was asked to ingest some data for Jenkins, and Splunk seems to have only ingested some of that data. I have this monitor installed on both the Production and Development remote instances:

  [monitor:///var/lib/jenkins/jobs.../log]
  recursive = true
  index = azure
  sourcetype = jenkins
  disabled = 0

  [monitor:///var/lib/jenkins/jobs]
  index = jenkins
  sourcetype = jenkins
  disabled = 0
  recursive = true

  #[monitor:///var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14]
  #index = testing
  #sourcetype = jenkins
  #recursive = true
  #disabled = 0

Pretty much, I have most of the data ingested, but for whatever reason I can't find any data for /var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14, or other random paths that we spot check. For that bottom commented-out input, I specify the entire path and I even added a salt so we could re-ingest it. It's commented out right now, but I have tried different iterations for that specific path.

It has and continues to ingest everything under /var/lib/jenkins/jobs, but I do not see some of the data. Based on this input, should I be doing something else? Could it be an issue with having the same sourcetype as the data that is funneled to the azure index? Is the syntax incorrect? I want to ingest EVERYTHING, including files within subdirectories, into Splunk. That's why I used recursive, but is that not enough?

Thanks for any help.

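A small sketch that can help narrow down which files actually made it in, assuming the data landed in the jenkins and azure indexes named in the stanzas above:

  | tstats count where (index=jenkins OR index=azure) by source
  | search source="/var/lib/jenkins/jobs/web-pipeline/*"

Skipped or ignored files also tend to show up in the forwarder's own logs, so a search along these lines may surface the reason a particular path was never read:

  index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile) "/var/lib/jenkins/jobs"
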
Hi Team, I have containerized SC4S hosts which have UFs installed, but SC4S is forwarding data via HEC. I want to see the total logging size per host or SC4S source. Can someone help me with a query to get that data?

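A starting-point sketch using the license usage log, which meters ingested volume regardless of whether the data arrived over HEC or from a forwarder; the index name below is a placeholder for wherever your SC4S data lands.

  index=_internal source=*license_usage.log* type=Usage idx=<your_sc4s_index>
  | stats sum(b) as bytes by h
  | eval GB=round(bytes/1024/1024/1024, 2)
  | sort - GB

Note that with very many distinct hosts, Splunk can squash the per-host breakdown in license_usage.log; in that case an approximation such as | eval bytes=len(_raw) over the raw events, summed by host or source, can be used instead.
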
Why was Windows Server 2016 support removed from the Splunk Universal Forwarder as of v9.3 (7/30/2024), when Windows Server 2016 is still under extended support until 2027?

I'm working with a 9.1.2 UF on Linux. This is the props.conf:

  [stanza]
  #
  # Input-time operation on Forwarders
  #
  LINE_BREAKER = ([\r\n]+)
  NO_BINARY_CHECK = true
  SHOULD_LINEMERGE = false
  TRUNCATE = 999
  DATETIME_CONFIG = CURRENT

This is the contents of the file:

  Splunk Reporting Hosts as of 07/31/2024 12:05:01 UTC
  host
  hostname1
  hostname2
  hostname3
  hostname4
  ...
  hostname1081

There are 1,083 lines in the file. I used od -cx to verify there is \n at the end of each line. For some reason, the last entry from a search consists of the first 257 lines from the file, and then the remaining lines are individual entries. I didn't have DATETIME_CONFIG in the stanza, so I thought that might be the issue. It is now, and it is still an issue. I'm out of ideas. Anyone see this before or have an idea on how to resolve this?

TIA,
Joe

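A small check that may help confirm where the merging is happening, assuming the file is indexed under the stanza's sourcetype (the index and sourcetype names below are placeholders): counting the lines in each indexed event shows whether you have one oversized 257-line event followed by single-line events, which points at event breaking rather than the file itself.

  index=<your_index> sourcetype=<your_sourcetype>
  | rex max_match=0 "(?<line>[^\r\n]+)"
  | eval line_count=mvcount(line)
  | stats count by line_count
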
Hi, I'm trying to design a distributed Splunk architecture for my company, and I need to pitch the design to them. I need to know the total number of servers required and each system's specifications.

How can I start with this? I have little knowledge of the Splunk admin side, mainly because I am a developer.

Users per day can be less than 1,000, and the indexing volume should be around 5 GB/day.

Can anyone please recommend where to start?

Good morning,

So I am trying to monitor all files within this directory: /var/log/syslog/<IP>

Directory structure: /var/log/syslog/<IP>/2024/01 | 02 | 03 | 04 | 05 | 06 | 07/secure | cron | messages

Hope this makes sense; there are multiple subdirectories, and the end goal is to monitor secure, cron, and messages. I wrote this stanza within inputs.conf, and the configuration did take on the Universal Forwarder:

  [monitor:///var/log/syslog/192.168.1.1/.../secure]
  disabled = false
  host_segment = 4
  index = insght

  [monitor:///var/log/syslog/192.168.1.1/.../cron]
  disabled = false
  host_segment = 4
  index = insght

  [monitor:///var/log/syslog/192.168.1.1/.../messages]
  disabled = false
  host_segment = 4
  index = insght

I have also tried this to capture all subdirectories/files:

  [monitor:///var/log/syslog/192.168.1.1]
  disabled = false
  host_segment = 4
  recursive = true
  index = insght

Also, within _internal I get this message:

  INFO TaillingProcess [#### MainTailingThread] - Parsing configuration stanza: monitor:///var/log/syslog/<IP>

which seems to hang there with no other messages logged for the particular stanza(s).

The IP address used is notional. Thanks for the help!

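One way to see what the forwarder is actually doing with those paths is to search its own internal logs from the indexer, roughly like this (the host value is a placeholder for the forwarder's name):

  index=_internal host=<forwarder_host> sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "/var/log/syslog/"
  | table _time host component log_level _raw

Messages about files being ignored, unreadable, or matched by the monitor stanza usually show up here and narrow down whether the issue is the stanza pattern or file access.
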
Does anyone have a template of capabilities you think are necessary for roles specific to CISOs, ISSM/ISSO, and analysts? I know we can probably just use the User and Power User roles as a baseline, but I was wondering if anyone had any other input or had identified specific items they think those roles need.

I've been asked to identify unused knowledge objects. I'm honestly not sure of the best way to go about this request. I have checked the next scheduled time, but I'm not sure if that's all I need to do before contacting the object owners. Any ideas or documentation to help me accomplish this task would be most appreciated. Thank you!

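For the saved-search portion of this, a sketch along these lines can flag searches with no recorded runs in the audit index over a chosen window; this only covers saved searches, the run_count/last_run names are mine, and the 90-day window is an assumption:

  | rest /servicesNS/-/-/saved/searches splunk_server=local
  | fields title eai:acl.app eai:acl.owner is_scheduled disabled
  | join type=left title
      [ search index=_audit action=search savedsearch_name=* earliest=-90d
        | stats count as run_count latest(_time) as last_run by savedsearch_name
        | rename savedsearch_name as title ]
  | where isnull(run_count)

Other object types (dashboards, lookups, macros, field extractions) have their own REST endpoints and usage signals, so each would need a similar pass.
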
In Splunk ES and the platform, this error keeps appearing and I have not been able to resolve it:

  Could not load lookup=LOOKUP-useragentstrings

Hi,

I am currently looking to optimise the search below, as it is using a lot of search head resource:

  index=idem attrs.GW2_ENV_CLASS=preprod http_status=5* http_status!=503 NOT "mon-tx-"

Sample JSON result set:

  @timestamp: 2024-07-31T12:41:20+00:00
  attrs.AWS_AMI_ID:
  attrs.AWS_AZ: eu-west-1c
  attrs.AWS_INSTANCE_ID: i-0591d93b5e5881da9
  attrs.AWS_REGION: eu-west-1
  attrs.GW2_APP_VERSION:
  attrs.GW2_ENV_CLASS: preprod
  attrs.GW2_ENV_NUMBER: 0
  attrs.GW2_SERVICE: idem
  body_bytes: 1620
  bytes_sent: 2060
  client_cert_expire_in_days: 272
  client_cert_expiry_date: Apr 30 10:11:07 2025 GMT
  client_cert_issuer_dn: CN=******* PROD SUB CA2,O=Fidelity National Information Services,L=Jacksonville,ST=Florida,C=US
  client_cert_verification: SUCCESS
  client_dn: CN=idem-semantic-monitoring-preprod,OU=Gateway2Cloudops,O=Fidelity National Information Services,L=London,C=GB
  container_id: 17b7167ec5f2d20ec10704550fc8f2c2b9daedc835ce5fe0828ac86651983517
  container_name: /idem-kong-1
  correlationId:
  hostname: 17b7167ec5f2
  http_content_type: application/vnd.*******.idempotency-v1.0+json
  http_referer:
  http_status: 200
  http_user_agent: curl/8.5.0
  log: {"@timestamp": "2024-07-31T12:41:20+00:00", "correlationId": "", "request_method": "POST", "hostname": "17b7167ec5f2", "http_status": 200, "bytes_sent": 2060, "body_bytes": 1620, "request_length": 1689, "request": "POST /idempotency/entries/update HTTP/2.0", "http_user_agent": "curl/8.5.0", "http_referer": "", "body_bytes": 1620, "remote_addr": "10.140.49.156", "remote_user": "", "response_time_s": 0.007, "client_dn": "CN=idem-semantic-monitoring-preprod,OU=Gateway2Cloudops,O=Fidelity National Information Services,L=London,C=GB", "client_cert_issuer_dn": "CN=******* RSA PROD SUB CA2,O=Fidelity National Information Services,L=Jacksonville,ST=Florida,C=US", "client_cert_expiry_date": "Apr 30 10:11:07 2025 GMT", "client_cert_expire_in_days": "272", "client_cert_verification": "SUCCESS", "wpg_correlation_id": "mon-tx-ecs-1722429678-idem-pp-2.preprod.euw1.gw2.*******.io", "http_content_type": "application/vnd.******.idempotency-v1.0+json", "uri_path": "/idempotency/entries/update"}
  parser: json
  remote_addr: 10.140.49.156
  remote_user:
  request: POST /idempotency/entries/update HTTP/2.0
  request_length: 1689
  request_method: POST
  response_time_s: 0.007
  source: stdout
  uri_path: /idempotency/entries/update
  wpg_correlation_id: mon-tx-ecs-1722429678-idem-pp-2.preprod.euw1.gw2.*******.io

I have tried adding additional filtering on particular fields, but it is not having the desired effect. Please note, the wildcards in the JSON are where I have masked this for the purposes of this community case.

Thanks,

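A hedged sketch of two low-risk tweaks that often reduce search head load: keep the base search as selective as possible, then drop _raw and any unneeded fields as early as you can so less data is shipped to and processed on the search head. The field list below is only a guess at what the rest of your search actually uses.

  index=idem attrs.GW2_ENV_CLASS=preprod http_status=5* http_status!=503 NOT "mon-tx-"
  | fields _time host http_status uri_path request_method response_time_s attrs.GW2_SERVICE
  | fields - _raw

If this runs on a dense schedule behind dashboards, moving it to a scheduled/saved search or summary index also takes recurring load off the search head.
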
Hi Splunkers,

I'm looking to cross-compare some events with other system data, using an initial search for the event and then using map to load data from another index:

  index=event sourcetype=eventdat
  | where like(details, "...")
  | eval earliest=floor(_time), latest=ceil(_time+2)
  | table _time details earliest latest
  | map [ search index=sys_stats sourcetype=statdat device="..." earliest=$earliest$ latest=$latest$
      | stats count as counter
      | eval details=$details$, earliest=$earliest$, latest=$latest$
      | table _time details counter earliest latest ] maxsearches=10

When running, I get the error: Invalid value "$earliest$" for time term 'earliest'

I've tried $$ and "$...$" with no luck. I can't figure out why $earliest$ isn't being passed.

Any help would be appreciated (:

Notes: I've reviewed these posts but they don't seem relevant:
https://community.splunk.com/t5/Splunk-Search/Invalid-value-X-for-time-term-earliest-but-only-for-specific/m-p/624962#M217251
https://community.splunk.com/t5/Splunk-Search/Invalid-value-quot-week-quot-for-time-term-earliest/m-p/469491#M132104

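Not a definitive fix, but one workaround worth trying, under my assumption that the substitution is tripping over the outer fields being named earliest and latest: compute the window into differently named fields (et and lt here are my names) and reference those tokens inside map, keeping earliest=/latest= only as the time terms of the inner search.

  index=event sourcetype=eventdat
  | where like(details, "...")
  | eval et=floor(_time), lt=ceil(_time+2)
  | table _time details et lt
  | map [ search index=sys_stats sourcetype=statdat device="..." earliest=$et$ latest=$lt$
      | stats count as counter
      | eval details="$details$" ] maxsearches=10
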
Hi All, I have a percentage value in one field in a Dashboard Studio dashboard. I need to apply the colours below:
0% - 5%: green
5% - 10%: yellow
others: red

Hi,

Can someone tell me how we can use a CSV file via a lookup and extract the details from the file into a field which we can use for further calculations?

Example: a CSV file (dummy.csv) with the details below is saved in Splunk, and we need to extract the details present in the file after the date into a new field in Splunk and use the new field for further calculations.

Data in the dummy.csv file:

  "Monday,01/07/2024",T2S Live Timing,"[OTHER] BILL invoice for CSDs Billing period 10-30 June ",,,,,,
  "Tuesday,02/07/2024",, ,,,,,,
  "Wednesday,03/07/2024",,"[OTHER] BILL invoice for NCBs Billing period 10-30 June",,,,,,
  "Thursday,04/07/2024",, ,[OTHER] DKK Service window between 19.35 - 23.59 ,,,,,
  "Friday,05/07/2024",T2S Synchronised Release day,,,,,,,
  "Saturday,06/07/2024",,[4CB] T2-T2S Site Recovery (internal technical test) ,[4CB] T2-T2S Site Recovery (internal technical test) ,,,,,
  "Sunday,07/07/2024",,[4CB] T2-T2S Site Recovery (internal technical test) ,[4CB] T2-T2S Site Recovery (internal technical test) ,,,,,
  "Monday,08/07/2024",T2S Live Timing, ,,,,,,

How can we use the lookup and eval commands to find the data present in the file after the date? For example:

  Date = 01/07/2024  Output = T2S Live Timing
  Date = 02/07/2024  Output = Blank Space
  Date = 03/07/2024  Output = Blank Space
  Date = 04/07/2024  Output = Blank Space
  Date = 05/07/2024  Output = T2S Synchronised Release day

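A rough sketch, with the big caveat that the real column names in the uploaded lookup are unknown; here I assume the first column (the quoted day-and-date) is called day_date and the second column (the value right after the date) is called detail. The idea is to pull the date out of the combined column with rex and treat the second column as the output field:

  | inputlookup dummy.csv
  | rex field=day_date "^\w+,(?<date>\d{2}/\d{2}/\d{4})$"
  | eval output=if(isnull(detail) OR trim(detail)="", "Blank Space", detail)
  | table date output

Once the real column names are confirmed, the extracted date can be matched against your events (for example via a lookup definition on dummy.csv) and the output field used in further eval calculations.
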