Splunk Enterprise: 9.0.3 (Linux)
Splunk Add-on for Microsoft Windows: 8.9.0
Data source: Windows Server 2016
Data format: XML

When extracting EventIDs from XML data, the EventID is _not_ extracted if there is a "Qualifiers" attribute. Only the "Qualifiers" field is then extracted - see screenshot. Is this intentional?
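A possible workaround (a sketch, not a fix in the add-on itself) is to pull the EventID straight out of the raw XML with rex; the regex assumes the usual Windows XML layout, where Qualifiers appears as an attribute on the EventID element:

| rex field=_raw "<EventID[^>]*>(?<EventID>\d+)</EventID>"

This matches the element whether or not the Qualifiers attribute is present.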
Hi, I want to extract the highlighted part:

RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2;

local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656]

17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655]
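Assuming the highlighted part is the alarm text between "RAISE-ALARM:" and "Severity:", a rex sketch along these lines should pull it out (the field names are my own choice):

| rex "RAISE-ALARM:(?<alarm_name>[^:]+):\s(?<alarm_text>[^;]+);\sSeverity:(?<severity>\w+)"

Against the first event this yields alarm_name=acIpGroupNoRouteAlarm, alarm_text=[KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy, and severity=major.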
When running a search on the Incident Review dashboard where the search term is the <event_id> value or event_id="<event_id>", there are no results. It used to work in the past; in one of the recent updates it stopped working. I am using Enterprise Security version 7.3.2.
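As a sanity check (a sketch; <event_id> stays a placeholder), searching the notable index directly shows whether the notable event still exists with that field populated, which narrows the problem to the Incident Review filter itself:

index=notable event_id="<event_id>"

If this returns the event while the dashboard filter does not, the regression is in the dashboard's search construction rather than in the data.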
I am testing the SmartStore setup on S3 with Splunk Enterprise running on an EC2 instance. I am attempting this with an IAM role that has full S3 access. When I included the access keys in indexes.conf and started the instance, SmartStore started successfully. However, when I assigned the IAM role to the EC2 instance and removed the key information from indexes.conf, Splunk froze at the loading screen. Running AWS CLI commands shows that the various files in S3 are listed. Below is the indexes.conf. During startup Splunk freezes and does not start, and splunkd.log shows a shutdown message at the end. If I re-enter the key information in indexes.conf, it works again. I want to operate this using the IAM role.

[default]
remotePath = volume:rstore/$_index_name

[volume:rstore]
storageType = remote
path = s3://<S3-bucket-name>
remote.s3.endpoint = https://s3.ap-northeast-1.amazonaws.com
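For reference, this is the shape I would expect to work with an instance profile (a sketch; the bucket name is a placeholder, and remote.s3.auth_region is the one setting added beyond the original config). With no access key or secret key present, splunkd is supposed to fall back to the EC2 instance metadata credentials:

[volume:rstore]
storageType = remote
path = s3://<S3-bucket-name>
remote.s3.endpoint = https://s3.ap-northeast-1.amazonaws.com
# no remote.s3.access_key / remote.s3.secret_key: splunkd should fall
# back to the instance-profile credentials from the EC2 metadata service
remote.s3.auth_region = ap-northeast-1

One thing worth checking: if the instance enforces IMDSv2 with a low hop limit, the metadata call itself can hang, which would match a freeze during startup.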
I'm new to Splunk and really struggle with its documentation. Every time I try to do something, it does not work as documented. I'm pretty fluent with the free tool jq, but it requires downloading the data from Splunk to process it, which is very inconvenient to do across the globe. I have a query producing JSON. I'd like to do this trivial thing: extract data from the field json.msg (a trivial projection), parse it as JSON, then proceed further. In jq this is as simple as:

'.json.msg | fromjson'

Done. Can someone advise how to do this in Splunk? I tried:

… | spath input=json.msg output=msg_raw path=json.msg

and multiple variants of that, but it either does not compile (say, if path is missing) or does nothing.

… | spath input=json.msg output=msg_raw path=json.msg | table msg_raw

prints empty lines. I need to do much more complex things with it (reductions/aggregations/deduplications), all trivial in jq, but even this is not doable in a Splunk query. How do I do it? Or where is valid documentation showing things that work?
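A minimal sketch of the jq '.json.msg | fromjson' equivalent, assuming json.msg holds a JSON string inside the event: extract the string first, then parse it with a second spath whose input= is that string:

… | spath output=msg_raw path=json.msg
  | spath input=msg_raw

The first spath reads _raw by default and copies the value at path json.msg into msg_raw; the second parses msg_raw as JSON and flattens its keys into fields. The catch is that input= names the field containing the JSON while path= names a path inside that JSON, so combining input=json.msg with path=json.msg asks for a json.msg key inside the json.msg string, which is why it returned nothing.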
Trying to fix a corruption issue with a _metrics bucket, using the "./splunk rebuild <path>" command. Doing this, I receive the following WARN: "Fsck - Rebuilding entire bucket is not supported for "metric" bucket that has a "stubbed-out" rawdata journal. Only bloomfilter will be build". How would I rebuild the metrics bucket to fix the error?
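A sketch of what may be the most that can be done here (the path is a placeholder, and the exact flags vary a little between versions): because the rawdata journal is stubbed out, a full rebuild has no journal to replay, so fsck repair, which regenerates only the metadata it can (such as the bloomfilter), is likely the applicable mode:

./splunk fsck repair --one-bucket --bucket-path=<path-to-bucket>

If the journal itself is gone, the usual remaining option is restoring the bucket from a replicated copy or a backup.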
As a newbie I am currently working on a mini internship project which requires me to analyse a dataset using Splunk. I have completed almost all but the last part of it, which reads: "gender that performed the most fraudulent activities and in what category". Basically I'm supposed to get the gender (F or M) that performed the most fraud, and in what specific category. The dataset consists of the columns step, customer, age, gender, Postcodeorigin, merchant, category, amount and fraud, from a file named fraud_report.csv. The file has already been uploaded to Splunk. I am just stuck at the query part.
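A sketch (assuming the flag column is named fraud, with 1 marking fraudulent rows, and the file was indexed with source="fraud_report.csv"):

source="fraud_report.csv" fraud=1
| stats count AS fraud_count BY gender, category
| sort - fraud_count
| head 1

stats counts the fraudulent events per gender/category pair, sort puts the largest count first, and head 1 keeps the top combination.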
Hello, based on this Splunk query:

index=* AND appid=127881 AND message="*|NGINX|*" AND cluster != null AND namespace != null
| eval server = (namespace + "@" + cluster)
| timechart span=1d count by server

Because the logs are only kept for 1 month, and in the most recent month logs exist only on server 127881-p@23p, the query result shows only one column: 127881-p@23p. May I ask how to make the result show 3 columns: 127881-p@23p, 127881-p@24p, 127881-p@25p? Since there are no recent logs on 24p and 25p, the values for 24p and 25p should be 0. Thanks a lot!
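One sketch (the three server names are taken from the question): an eval placed after timechart creates any column that is missing, so each expected series can be coalesced with 0:

... | timechart span=1d count by server
| eval "127881-p@24p" = coalesce('127881-p@24p', 0), "127881-p@25p" = coalesce('127881-p@25p', 0)

Single quotes on the right-hand side read a field's value; double quotes on the left define the field name, which matters here because the names contain @ and -.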
Hi All, I have written a macro to get a field. It has 3 joins. When I use the macro in a dashboard, in a base search, it does not work properly and gives far fewer results. But when I use the macro in the search bar, it gives correct results. Does anyone know how I can solve this?
Hello everybody, 6 days ago I successfully passed the Splunk Admin certification exam, but I can't find the certificate on the Splunk and Pearson VUE sites. Where can I find and download the certificate?
Hello, in my Splunk web service we have a domain, for example: https://splunksh.com. The problem is that anyone can access https://splunksh.com/config without logging in. Although the page doesn't contain any sensitive data, our Cyber Security team deems it a vulnerability that needs to be fixed. I want to know how to either disable that URL or redirect it to the login page. Any help would be very appreciated.
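One approach, sketched under the assumption that a reverse proxy such as nginx sits (or can sit) in front of Splunk Web (splunksh.com is the placeholder domain from the question): intercept the path before it reaches splunkd and redirect it to the login page:

# nginx server-block fragment: send /config to the Splunk login page
location = /config {
    return 302 https://splunksh.com/en-US/account/login;
}

If nothing sits in front of Splunk Web, this gets harder; I'm not aware of a supported web.conf switch that disables that endpoint on its own.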
Hello everyone, I have collected some firewall traffic data: two firewalls (fw1/fw2), each with two interfaces (ethernet1/1 and ethernet1/2), collecting rxbytes and txbytes every 5 minutes. The raw data is shown below:

>>>
{"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59947791867743, "txbytes": 37019023811192}
{"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755935850903, "txbytes": 32252936430552}
{"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948210937804, "txbytes": 37019791801583}
{"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755965708078, "txbytes": 32253021060643}
{"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948636904106, "txbytes": 37020560028933}
{"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756002542165, "txbytes": 32253111011234}
{"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949094737896, "txbytes": 37021330717977}
{"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756101313559, "txbytes": 32253199085252}
{"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949550987330, "txbytes": 37022105630147}
{"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756167141302, "txbytes": 32253286546113}
{"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949968397016, "txbytes": 37022870539739}
{"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756401499253, "txbytes": 32253380028970}
{"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
<<<

Now I need to create one chart to show the value of "rxbytes" over time, with 4 series:

(series 1) fw1, ethernet1/1
(series 2) fw1, ethernet1/2
(series 3) fw2, ethernet1/1
(series 4) fw2, ethernet1/2

But I have a problem composing the SPL statement for this purpose. Can you please help here? Thank you in advance!
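A sketch (field names come from the sample events; the index and sourcetype are placeholders): concatenate fwname and interface into a single series key, then timechart rxbytes by that key:

index=<your_index> sourcetype=<your_sourcetype>
| eval series = fwname . " " . interface
| timechart span=5m max(rxbytes) by series

max() takes the one sample per 5-minute bin. Note that rxbytes looks like a cumulative counter; if you want throughput rather than the raw counter, compute per-series deltas first (for example with streamstats current=f last(rxbytes) as prev by series, then eval delta = rxbytes - prev) before charting.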
I want to count user_ids that appear more than once per month (i.e. a user that has used the product multiple times). I've tried a few variations such as:

search XXX | dedup XXX | stats count by user_id | where count > 1

but can't seem to get it to work. I'm hoping to display the count as a single number as well as timechart it, so I can show the number over the last X months. Any suggestions? It feels like it should have been easier than it has been!
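A sketch (search XXX stands in for the base search): bin the events by month, count events per user per month, keep the repeat users, then count distinct users per month:

search XXX
| bin _time span=1mon
| stats count by _time, user_id
| where count > 1
| stats dc(user_id) as repeat_users by _time

The monthly form charts directly; dropping the final "by _time" (| stats dc(user_id) as repeat_users) gives the single-number version. The dedup in the original attempt removes exactly the duplicates the count needs to see.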
Hello  I have a requirement to show one of my Splunk Cloud dashboards embedded on SharePoint on-prem page. Trying to use iframe for that purpose but get an error "Connection Refused". Any ideas, or anyone has tried this?
Both the Inputs and Configuration pages are not working on the Palo Alto Networks Add-on. I am using a standalone Splunk Enterprise with an NFR 50 GB license. I've never had problems with this TA before, but recently I redid my test environment to migrate from CentOS to RHEL, so I reinstalled Splunk with the latest version and all apps on their latest versions as well. Here are the errors: What am I doing wrong to get these errors?
Are there any plans to support this app on Splunk Cloud?
Hello! I am new to Splunk and AWS. I just set up Splunk on a Linux server in AWS. I now want to ingest sample data into AWS and forward it so I can view it in Splunk. I know I need to use the universal forwarder, but how do I actually go about ingesting the data into an S3 bucket to then be forwarded to Splunk? Yes, I know I can ingest sample data straight into Splunk, but I am trying to get real-world experience to get a job in cybersecurity!
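A sketch of the usual shape (the bucket name is a placeholder): the universal forwarder reads local files, not S3, so data typically lands in S3 via the AWS CLI (or an AWS service writing logs there), and Splunk then pulls from S3 with the Splunk Add-on for AWS rather than a UF:

# upload a sample log file to S3
aws s3 cp sample.log s3://<your-bucket>/logs/sample.log

From there, configuring a Generic S3 or SQS-based S3 input in the Splunk Add-on for AWS is the common way to get those objects indexed.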
Hi, we are decommissioning our Splunk infra; our company was purchased and the new management wants to free resources :(. We have 3 search heads (standalone) + 2 indexers (clustered). They asked me to break the indexer cluster to free storage, CPU and memory; I've found docs about removing nodes while keeping the cluster. We want to keep just one search head (the one with the license master) and one indexer. Is there documentation on how to "break" the cluster and keep just one indexer in standalone mode? (We need to keep the data for auditing reasons.) I know I can just put one in maintenance mode and power it off, but that procedure is intended to reboot/replace the "faulty" indexer after some time, not to keep it down forever and ever. Regards.
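A sketch of the conversion step (not official guidance; the stanza shown is the standard clustering configuration): once the surviving indexer holds a full copy of the data, stopping it and removing the clustering configuration from server.conf makes it start up as a standalone indexer:

# $SPLUNK_HOME/etc/system/local/server.conf on the surviving indexer
[clustering]
# delete this whole stanza (mode, manager_uri, pass4SymmKey, ...)
# then restart splunkd; the instance comes back as a standalone indexer

The remaining search head's distributed-search settings then need to point only at that indexer, and the manager node can be shut down afterwards.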
I have ClamAV running on all my Linux hosts (universal forwarders) and all logs seem to be fine except the ClamAV logs. The ClamAV scan report has an unusual log format (see below). I need help with how to ingest that report; Splunk (splunkd.log) shows an error when I try. I think I need to set up a props.conf, but I am not sure how to go about doing it. This is an air-gapped system, just FYI.

splunkd.log:

ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/var/log/audit/clamav_scan_20240916_111846.log). Last time we saw this, filename was different. You may wish to use larger initCrcLen for this sourcetype or a CRC salt on this source.

The ClamAV scan generates a log file as shown below:

-----------SCAN SUMMARY--------------
Known Viruses: xxxxxx
Engine Version: x.xx.x
Scanned Directories: xxx
Scanned Files: xxxxx
Infected Files: x
Data Scanned: xxxxMB
Data Read: xxxxMB
Time:
Start Date: 2024:09:16 14:46:58
End Date: 2024:09:16 16:33:06
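A sketch (the monitor path and sourcetype name are my assumptions based on the error message): every scan report starts with the same banner, so the initial-bytes CRC collides across files; crcSalt = <SOURCE> mixes the file path into the checksum, and the props settings make each report one event with the right timestamp:

# inputs.conf on the universal forwarder
[monitor:///var/log/audit/clamav_scan_*.log]
sourcetype = clamav:scan
# mix the filename into the checksum so identical headers don't collide
crcSalt = <SOURCE>

# props.conf on the indexer (or heavy forwarder)
[clamav:scan]
SHOULD_LINEMERGE = false
# break only at the SCAN SUMMARY banner so each report is one event
LINE_BREAKER = ()-+SCAN SUMMARY-+
TIME_PREFIX = Start Date:\s
TIME_FORMAT = %Y:%m:%d %H:%M:%S

Note that crcSalt = <SOURCE> is the literal string <SOURCE>; Splunk expands it to the source path itself.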