All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Ref Doc - Splunk Add-on for GCP Docs Currently, the Cloud Storage Bucket input doesn’t support pre-processing of data, such as untar/unzip/ungzip/etc. The data must be pre-processed and ready for ingestion in a UTF-8 parseable format
Hi All, I have already configured log ingestion from FortiGate using syslog; the logs are sent over UDP on port 514. I have also set up a data input in Splunk Enterprise to receive data on port 514. When I run a tcpdump on the Splunk VM, the data is flowing from the FortiGate to the Splunk VM, but when I search from Splunk Web, no data appears. Currently I ingest the data into one indexer and search it from a separate search head. Please advise on how to solve this issue. Thank you
I have a search that links the problem and problem task tables, with a scenario that gives unexpected results. My search brings back the latest ptasks against the problem, but I have identified some tasks that were closed as duplicates after the last update on the active tasks: (`servicenow` sourcetype="problem" latest=@mon) OR (`servicenow` sourcetype="problem_task" latest=@mon dv_u_review_type="On Hold") | eval problem=if(sourcetype="problem",number,dv_problem) | stats values(eval(if(sourcetype="problem_task",number,null()))) as number, latest(eval(if(sourcetype="problem_task",active,null()))) as task_active, latest(eval(if(sourcetype="problem_task", dv_u_review_type,null()))) as dv_u_review_type, latest(eval(if(sourcetype="problem_task",dv_due_date,null()))) as task_due, latest(eval(if(sourcetype="problem",dv_opened_at,null()))) as prb_opened, latest(eval(if(sourcetype="problem",dv_active,null()))) as prb_active by problem | fields problem, number, task_active, dv_u_review_type, task_due, prb_opened, prb_active | where problem!="" Is it possible to mark an event that is closed as out of scope and then exclude all the events with the same number?
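One possible approach (a sketch only — it assumes the duplicate-closed tasks can be recognised by some state field; `dv_state` and the `"Closed/Duplicate"` value below are hypothetical placeholders) is to flag the numbers that contain such a task with eventstats, then drop every event sharing that number:

```
... your existing base search ...
| eventstats max(eval(if(dv_state="Closed/Duplicate", 1, 0))) as has_duplicate_task by number
| where has_duplicate_task=0
```

Because eventstats annotates every event in the group before the filter runs, all events with the same number are kept or removed together.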
Morning all, I am trying to work out how to use Splunk SPL to pick random names from a list. I have one field called 'displayName'; there are over 200 entries and I'd like to use Splunk to pick 5 random names. I'd appreciate help with this. Paula
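A minimal sketch (the index and sourcetype are placeholders for wherever the names live) that assigns each distinct name a random number and keeps the five smallest:

```
index=your_index sourcetype=your_sourcetype
| dedup displayName
| eval r=random()
| sort r
| head 5
| table displayName
```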
Splunk Enterprise: 9.0.3 (Linux) Splunk Add-on for Microsoft Windows: 8.9.0 Data source: Windows Server 2016 Data format: XML When extracting EventIDs from XML data, the EventID is _not_ extracted if there's a "Qualifiers" attribute. Only the "Qualifiers" field is then extracted - see screenshot. Is this intentional?
Hi, I want to extract the highlighted part: RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] 17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655]
When running a search on the Incident Review dashboard where the search term is the <event_id> value or event_id="<event_id>", there are no results. It used to work in the past; it stopped working after one of the recent updates. I am using Enterprise Security version 7.3.2.
I am testing a SmartStore setup on S3 with Splunk Enterprise running on an EC2 instance. I am attempting this with an IAM role that has full S3 access. When I included the access keys in indexes.conf and started the instance, SmartStore started successfully. However, when I assigned the IAM role to the EC2 instance and removed the key information from indexes.conf, Splunk froze at the loading screen with indexes.conf.... Running AWS CLI commands shows that the various files in S3 are listed. Below is the indexes.conf. During loading, Splunk freezes and does not start, and splunkd.log shows a shutdown message at the end. If I re-enter the key information in indexes.conf, it works again. I want to operate this using the IAM role.   [default] remotePath = volume:rstore/$_index_name [volume:rstore] storageType = remote path = s3://<S3 bucket name> remote.s3.endpoint = https://s3.ap-northeast-1.amazonaws.com
I'm new to Splunk and really struggle with its documentation. Every time I try to do something, it does not work as documented. I'm pretty fluent with the free tool jq, but it requires downloading the data from Splunk to process it, which is very inconvenient to do across the globe. I have a query producing JSON events. I'd like to do this trivial thing: extract the data from the field json.msg (a trivial projection), parse it as JSON, then proceed further. In jq this is as simple as: '.json.msg | fromjson'. Done. Can someone advise how to do this in Splunk? I tried: … | spath input=json.msg output=msg_raw path=json.msg and multiple variants of that, but it either does not compile (say, if path is missing) or does nothing. … spath input=json.msg output=msg_raw path=json.msg | table msg_raw prints empty lines. I need to do much more complex things with it (reductions/aggregations/deduplications), all trivial in jq, but even this is not doable in a Splunk query. How do I do this? Or where is valid documentation showing things that work?
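One thing worth trying: spath's input option takes a field name, and field names containing a dot are safer referenced in eval with single quotes and copied to a plain name first. A minimal sketch (assuming json.msg really is a search-time field whose value is a JSON string):

```
... your search ...
| eval msg_raw='json.msg'
| spath input=msg_raw
```

Without a path argument, spath extracts every field from the parsed JSON; add path=... afterwards to project a single value, similar to a jq projection.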
Trying to fix a corruption issue with a _metrics bucket using the "./splunk rebuild <path>" command. Doing this, I receive the following WARN: "Fsck - Rebuilding entire bucket is not supported for "metric" bucket that has a "stubbed-out" rawdata journal. Only bloomfilter will be build". How would I rebuild the metrics bucket to fix the error?
As a newbie, I am currently working on a mini internship project which requires me to analyse a dataset using Splunk. I have completed almost all but the last part of it, which reads "gender that performed the most fraudulent activities and in what category". Basically I'm supposed to get the gender (F or M) that performed the most fraud, and in which specific category. The dataset consists of the columns steps, customer, age, gender, Postcodeorigin, merchant, category, amount, and fraud, from a file named fraud_report.csv. The file has already been uploaded to Splunk. I am just stuck at the query part.
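A minimal sketch, assuming the fraud column flags fraudulent rows with 1 and that the uploaded file is searchable as source="fraud_report.csv" (adjust the index/source to match your setup):

```
source="fraud_report.csv" fraud=1
| stats count by gender, category
| sort - count
| head 1
```

This counts fraudulent rows per gender/category pair and keeps the largest, which answers both halves of the question at once.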
Hello, based on this Splunk query:   index=* AND appid=127881 AND message="*|NGINX|*" AND cluster != null AND namespace != null | eval server = (namespace + "@" + cluster) | timechart span=1d count by server Because the logs are only kept for 1 month, and in the most recent month logs exist only on server 127881-p@23p, the query result shows only one column: 127881-p@23p. May I ask how to make the result have 3 columns: 127881-p@23p, 127881-p@24p, 127881-p@25p? And since there are no logs in 24p and 25p recently, the values for 24p and 25p should be 0. Thanks a lot!
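One common workaround (a sketch, not the only way) is to append zero-weight placeholder rows for every server you want as a column, then timechart over a weight field so the placeholders create the columns without inflating the counts:

```
index=* appid=127881 message="*|NGINX|*" cluster=* namespace=*
| eval server = namespace + "@" + cluster, weight = 1
| append
    [| makeresults
     | eval server = split("127881-p@23p,127881-p@24p,127881-p@25p", ","), weight = 0
     | mvexpand server]
| timechart span=1d sum(weight) as count by server
| fillnull value=0
```

The appended rows carry weight 0, so they force the 24p and 25p columns to exist while contributing nothing to any day's total.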
Hi All, I have written a macro to get a field. It has 3 joins. When I use the macro in a dashboard, in a base search, it does not work properly and returns far fewer results. But when I use the macro in the search bar, it gives correct results. Does anyone know how I can solve this?
Hello, in my Splunk web service we have a domain, for example: https://splunksh.com  The problem is that anyone can access https://splunksh.com/config without logging in. Although the page doesn't contain any sensitive data, our cyber security team deems it a vulnerability that needs to be fixed. I want to know how to either disable that URL or redirect it to the login page. Any help would be very appreciated.
Hello everyone, I have collected some firewall traffic data: two firewalls (fw1/fw2), each with two interfaces (ethernet1/1 and ethernet1/2), collecting rxbytes and txbytes every 5 minutes. The raw data is shown below:
>>>
{"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59947791867743, "txbytes": 37019023811192}
{"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755935850903, "txbytes": 32252936430552}
{"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948210937804, "txbytes": 37019791801583}
{"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755965708078, "txbytes": 32253021060643}
{"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948636904106, "txbytes": 37020560028933}
{"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756002542165, "txbytes": 32253111011234}
{"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949094737896, "txbytes": 37021330717977}
{"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756101313559, "txbytes": 32253199085252}
{"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949550987330, "txbytes": 37022105630147}
{"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756167141302, "txbytes": 32253286546113}
{"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949968397016, "txbytes": 37022870539739}
{"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756401499253, "txbytes": 32253380028970}
{"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0}
{"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0}
<<<
Now I need to create one chart showing the value of rxbytes over time, with 4 series: (series 1) fw1, ethernet1/1; (series 2) fw1, ethernet1/2; (series 3) fw2, ethernet1/1; (series 4) fw2, ethernet1/2. But I am having trouble composing the SPL statement for this. Can you please help? Thank you in advance!
I want to count user_ids that appear more than once per month (i.e. users that have used the product multiple times). I've tried a few variations such as: search XXX | dedup XXX | stats count by user_id | where count > 1 but can't seem to get it to work. I'm hoping to display the count as a single number and also timechart it so I can show the number over the last X months. Any suggestions? It feels like it should've been easier than it has been!
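A sketch of one way to get the per-month view, assuming the events carry a user_id field:

```
search XXX
| bin _time span=1mon
| stats count by _time, user_id
| where count > 1
| stats dc(user_id) as repeat_users by _time
```

For the single-number version, drop the bin step and both "by _time" clauses and run the search over the month you care about.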
Hello, I have a requirement to show one of my Splunk Cloud dashboards embedded in a SharePoint on-prem page. I am trying to use an iframe for that purpose but get a "Connection Refused" error. Any ideas, or has anyone tried this?
Both the Inputs and Configuration pages are not working on the Palo Alto Networks Add-on. I am using a standalone Splunk Enterprise instance with an NFR 50GB license. I've never had problems with this TA before, but recently I redid my test environment to migrate from CentOS to RHEL, so I reinstalled Splunk with the latest version and all apps at their latest versions as well. Here are the errors:   What am I doing wrong to get these errors?
Are there any plans to support this app on cloud?