All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I'm trying to create a stream for CloudWatch Logs in Splunk Cloud Web, but it is not streaming to the sourcetype/index I have set up. I found out that our Splunk HF is already streaming these CloudWatch Logs (from S3 directly), but with the default configuration (index=aws sourcetype=aws:cloudwatchlogs <resource_id>). Is it possible to customize this from the HF? The "aws_cloudwatch_logs_tasks.conf" file is empty. Note: per @jzhong_splunk's answer at https://community.splunk.com/t5/All-Apps-and-Add-ons/Why-are-some-AWS-CloudWatch-logs-not-appearing-in-Splunk/m-p/237123, if using a HF I would need to raise a ticket. Why?
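If the input was configured through the add-on's UI rather than by hand, the task may live in a different app context, which would explain the empty file. As a reference point, here is a minimal sketch of the kind of stanza the Splunk Add-on for AWS keeps in aws_cloudwatch_logs_tasks.conf; the stanza name and values below are hypothetical, so verify the exact keys against the .spec file shipped with your add-on version:
```
# $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/aws_cloudwatch_logs_tasks.conf
# Hypothetical example -- check the add-on's .spec file for the supported keys.
[my_cloudwatch_task]
account = my_aws_account          # AWS account name as configured in the add-on (placeholder)
region = us-east-1
groups = /my/log/group            # CloudWatch log group(s) to collect (placeholder)
index = my_custom_index           # override the default "aws" index
sourcetype = my:custom:sourcetype
interval = 600
```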
Hi all,
So, our license expired, and there was a 45-day gap before the new license was uploaded. Now, I know that search is blocked but indexing continues, and that Splunk reverts to its trial license daily index volume.
1. My question is: does Splunk still index all incoming logs during that time?
2. Can the logs that were not indexed during that time be reindexed, or what happens to them?
Thanks
I have several alerts set up for a series of events. When an alert fires, I want to log it to a new index. The problem when I do that is that the new index isn't an index of the original events, but rather of the alerts. So I'm trying to use the "Log Event" action in the alert to add it to the index, and at the same time use the "Event" field of the "Log Event" action to include information about the offending event in JSON format, with token variables from $result$. So my Event field is literally:
```
{"field1":"$result.value1$", "field2":"$result.value2$", ...}
```
This was "kind of" working, but some fields were causing issues with the JSON parsing if they included line breaks, etc. I started tinkering with the props.conf file, as well as with the alert query string, to try to pre-process the data and prevent it from breaking the JSON parsing, but now it is just completely broken and I don't understand why. I've reverted the alerts to be working as they always were. My props.conf file is stripped down to simply:
```
[my_sourcetype]
INDEXED_EXTRACTIONS = json
TRUNCATE = 0
SHOULD_LINEMERGE = true
```
I have tried with and without KV_MODE. Sometimes it works, sometimes it doesn't. At this point I'm a little bit lost on what to try or explore. Maybe there's a different approach to what I'm trying to do? I want Splunk to treat the data as fields so I can reference them directly from SOAR / Phantom. Any and all ideas welcome. Thank you!
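One alternative worth trying, sketched here on the assumption that you are on Splunk 8.1 or later where eval's json_object() function is available: build the JSON inside the alert's search instead of in the Event field, so line breaks and quotes are escaped for you, and hand the Log Event action a single pre-built token. Field names are the placeholders from the question:
```
... your alert search ...
| eval payload = json_object("field1", value1, "field2", value2)
| table payload
```
The Event field of the Log Event action then becomes just $result.payload$, and the logged event stays valid JSON regardless of what the field values contain.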
I am creating a dashboard for reporting, and one of the values of my search is called 'start date'. When I check the 'start date' column in the Search app, it shows the date in this format: 2022-07-10, which is what I want. When I put the query into a Splunk dashboard, the field value shows like this for all values in the column: 2022-07-10t18:00:00-06:00. Question: how can I have the query show just Year-Month-Day and remove the rest of what is showing above?
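A minimal sketch, assuming 'start date' arrives as a string in the ISO 8601 form shown above; since the first ten characters are exactly the date, a simple substring is enough:
```
| eval "start date" = substr('start date', 1, 10)
```
If the field can arrive in more than one format, converting via strptime and reformatting with strftime would be the more robust route.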
Hi all, is there any way to add Citrix application data into Splunk? Please let me know if there is any add-on to ingest it from Citrix Receiver.
10-13-2022 19:05:01.052 +0800 ERROR sendmodalert [20016 AlertNotifierWorker-0] - action=twilio - Execution of alert action script failed
10-13-2022 19:05:01.052 +0800 ERROR sendmodalert [20016 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script execution failed.
10-13-2022 19:05:01.052 +0800 ERROR SearchScheduler [20016 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script execution failed., search='sendalert twilio results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__

SMS Alerting from Splunk with Twilio | Splunk
https://www.baboonbones.com/php/markdown.php?document=twilio_alert/README.md

I have followed all the documentation for this but am getting the error above. Please help.
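"Alert script execution failed" is generic; the script's own stderr usually pinpoints the cause (missing Python module, bad credentials, and so on). A first step, assuming you can search the _internal index:
```
index=_internal sourcetype=splunkd component=sendmodalert action="twilio"
```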
I've been able to deploy universal forwarders to dozens of Windows servers that produce IIS logs. I have created a dedicated index, and I have pushed an app (it used to be Splunk-supported; they have since moved to a different app package) to said forwarders. The forwarders are set to send the data to our indexer cluster. To cover my bases for the different versions, I have included several different monitor stanzas in the inputs.conf file:
```
[monitor://C:\inetpub\logs\...\W3S*\*.log]
disabled = false
sourcetype = ms:iis:auto
index = iis

[monitor://C:\inetpub\logs\*\W3S*\*.log]
disabled = false
sourcetype = ms:iis:auto
index = iis

[monitor://C:\Program Files\Microsoft\Exchange Server\V*\Logging\Ews]
disabled = false
sourcetype = ms:iis:auto
index = iis
```
When deployed to the dozens of servers, I'm not seeing any data come in, or even any path watches, when searching the logs coming back from the universal forwarders. As a test I added several files to a dedicated server and kept playing around with the monitor stanzas, with no luck. When opening the inputs.conf locally on that server in Notepad, the text looked merged, so I added some spaces and line breaks. After restarting the service I can see path watches added, but still nothing coming in. Even when specifying a path to a single file, nothing comes in:
```
[monitor://C:\Test\logs\LogFiles\W3SVC1\u_ex221010.log]
disabled = false
sourcetype = ms:iis:auto
index = iis
```
For something that seems so simple, where am I going wrong?
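Two checks on the forwarder itself usually narrow this down. The "merged" look in Notepad also hints that the deployed file may have Unix line endings or an unexpected encoding, which the first command will reveal if the stanzas aren't being read as intended. Paths below assume a default Windows UF install:
```
REM Show the merged monitor configuration as the UF actually parses it:
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list monitor --debug

REM Show per-file status (read, ignored, unreadable) for each monitored path:
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list inputstatus
```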
I have data like "1111|xxx, xxx y|000000|111111|firstname, lastname|10/13/22 02:12:09|". I used TIME_FORMAT = %m/%d/%Y %H:%M:%S and TIME_PREFIX = ^(?:[^\|\n]*\|){5}. However, I still get an error stating that strptime could not parse the timestamp. I would need help with the timestamp prefix here.
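The prefix itself looks fine; the likelier culprit is the year token: the sample has a two-digit year (22), while %Y expects four digits. A minimal props.conf sketch under that assumption (the sourcetype name is a placeholder):
```
[your_sourcetype]
TIME_PREFIX = ^(?:[^\|\n]*\|){5}
TIME_FORMAT = %m/%d/%y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```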
Hi, I want to use Splunk but am not sure where to start; I am new to it. I have a situation where I have a log file that has all sorts of logs, say category1, category2, and category3. I have dedicated regex parsers for each category, say parser1, parser2, and parser3. One single log line will match only one of the parsers. If there is no suitable parser, i.e. no match is found, the line is not eligible to be indexed. I want all of this to happen before indexing. The log source could be either a log file or a stream of logs. Can someone help me with how to parse the whole log file and get each line parsed and indexed in one single index, say myidx? I understand I will have to deploy props.conf and transforms.conf, but how do I configure these files to achieve this? Please help or suggest a better way. TIA.

Sample log lines:
1. Sep 01 23:43:47 test_device001 test_device001 default default-log [test_domain][0x0001][mp][alert] mp(Rrocessor): trans(53)[request][109.2.x.z] gtid(127d3b333052): event((test.xsl) Transaction Initiated) TestURI(my/mapped/url) Size(0) Node((test_domain)) userID(test_uid)
2. Sep 05 23:43:47 test_device001 test_device001 default default-log [test_domain][0x0001][mp][alert] mp(Rrocessor): trans(53)[request][109.2.x.z] gtid(127d3b33305): (set-client-idy-head.xsl)*** P O N O D E T<entry><url event((test.xsl) Transaction Initiated) TestURI(my/mapped/url) <http-method>GET</http-method>
3. Sep 04 23:43:47 test_device001 test_device001 default default-log [test_domain][0x0001][mp][alert] mp(Rrocessor): trans(53)[request][109.2.x.z] gtid(127d3b333052): *** NODETYPE(SS) ***FLOW(HTTP{->HTTP) ***OUTG(mysite.test.com)
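The standard pattern for "index only lines that match one of my regexes" is queue routing at parse time (on the indexer or a heavy forwarder): send everything to the nullQueue first, then route matching lines back to the indexQueue. A minimal sketch with placeholder regexes standing in for parser1/parser2/parser3:
```
# props.conf
[my_sourcetype]
TRANSFORMS-filter = drop_all, keep_cat1, keep_cat2, keep_cat3

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_cat1]
REGEX = <parser1 regex>
DEST_KEY = queue
FORMAT = indexQueue

[keep_cat2]
REGEX = <parser2 regex>
DEST_KEY = queue
FORMAT = indexQueue

[keep_cat3]
REGEX = <parser3 regex>
DEST_KEY = queue
FORMAT = indexQueue
```
The transforms run in the order listed, so the last one that matches wins; anything no keep_* regex matches stays in the nullQueue and is never indexed. The target index (myidx) is then set in inputs.conf as usual.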
Hi, I'm starting with ES Threat Intelligence and am wondering how threat intel data is populated into the KV stores used in the correlation search "Threat Activity Detected". As a simple example, I manually added an entry to local_email_intel (which is, of course, enabled). Now I'm expecting the email address to appear in the KV store threatintel_by_email, which is used in the threat-matching search for email. But threatintel_by_email is still empty, although I have waited a while for background jobs. I can't find the entered email address in the Threat Artifacts dashboard either. What is my mistake here? What kind of background job do we need to wait for to make my entry available for threat detection? Thanks in advance.
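A quick check, assuming you can search on the ES search head and that a lookup definition matching the collection name exists (the name below is the one from the question): confirm whether the collection really is empty before chasing the background jobs:
```
| inputlookup threatintel_by_email
```
If that returns nothing, searching index=_internal for the threat intelligence modular input's log (on an ES search head this is typically a threat_intelligence_manager log) is the usual next step; treat the exact source name as version-dependent.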
Hi all, I have a single value visualisation added in a dashboard. Its background colour depends on the value shown (green for 'Pass' and red for 'Fail'). But somehow it always gives a red background even though the value is 'Pass'. Here is the code I use:
```
<panel depends="$hide_css$">
  <html>
    <style>
      #verdict rect { fill: $verdict_background$ !important; }
      #verdict text { fill: $verdict_foreground$ !important; }
    </style>
  </html>
</panel>
<panel>
  <single id="verdict">
    <search>
      <query>index=temp_index
| search splunk_id=$splunk_id$
| eval ver = verdict.$campaigns_included$
| table verdict.$campaigns_included$
      </query>
      <done>
        <eval token="verdict_background">if($result.ver$=="Pass", "green", "red")</eval>
        <set token="verdict_foreground">black</set>
      </done>
    </search>
    <option name="colorMode">block</option>
    <option name="drilldown">none</option>
    <option name="height">60</option>
    <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
    <option name="rangeValues">[0]</option>
    <option name="useColors">1</option>
  </single>
</panel>
```
$campaigns_included$ is the value that's chosen in a dropdown. Please help, any help would be appreciated. @bowesmana requesting expertise here!
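Two quoting issues would produce exactly this always-red behaviour, sketched below under the assumption that verdict.$campaigns_included$ expands to a real field name. First, a field name containing dots must be wrapped in single quotes in eval. Second, the query tables verdict.$campaigns_included$ rather than ver, so $result.ver$ never exists; and inside the <eval> the token must be wrapped in double quotes, otherwise its value (Pass) is treated as a field name instead of a string:
```
<query>index=temp_index splunk_id=$splunk_id$
| eval ver = 'verdict.$campaigns_included$'
| table ver
</query>
<done>
  <eval token="verdict_background">if("$result.ver$"=="Pass", "green", "red")</eval>
  <set token="verdict_foreground">black</set>
</done>
```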
Hello, I have been building a dashboard in Dashboard Studio and was looking for some help with implementing the fields option from XML dashboards in Dashboard Studio. If a panel in an XML dashboard has, say, 5 fields and I want to display only 3 in the table, we can use <fields>[field1,field2,field3]</fields>; when I download the panel results, I can still view all 5 fields. I am looking to do the same thing in Dashboard Studio and am not able to get the functionality. I have tried using the header option in the table options without success:
```
{
  "type": "splunk.table",
  "options": {
    "tableFormat": {
      "headerBackgroundColor": "#0E6162",
      "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
      "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",
      "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
    },
    "backgroundColor": "#ffffff",
    "table": "> [\"field1\",\"field2\"]",                          <- didn't work
    "headers": "> table | getField([\"field1\",\"field2\"])",      <- didn't work
    "showInternalFields": false
  },
  "dataSources": {
    "primary": "ds_TlbXBz5i"
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
```
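One approach that avoids the table options entirely, sketched as an assumption about your setup: keep the full 5-field search as the base data source and point the table at a chained data source that narrows the columns. The data source IDs and field names below are placeholders:
```
"dataSources": {
  "ds_TlbXBz5i": {
    "type": "ds.search",
    "options": { "query": "<your full search returning all 5 fields>" }
  },
  "ds_table_fields": {
    "type": "ds.chain",
    "options": {
      "extend": "ds_TlbXBz5i",
      "query": "| table field1 field2 field3"
    }
  }
}
```
The table visualization's "primary" then points at ds_table_fields, while exports of the base search still carry all 5 fields.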
Hi all, I have a table with these fields:

Time/Date | Key | Robot | Process | Host | Status | Environment | Type
01.01.2022 12:30:00 | Key 1, Key 2, Key 3 | Robot 1, Robot 2, Robot 3 | Process Claim, Process Claim, Process Claim | Host W, Host X, Host Y | Success, Success, Success | Production, Production, Production | Critical, Critical, Critical
01.01.2022 12:30:00 | Key 4 | Robot 4 | Process Refund | Host Z | Success | Production | Critical
02.02.2022 11:30:00 | Key 5 | Robot 5 | Process Tax | Host V | Failed | Non-Production | Minor

I want to split up the first row into three rows to show the data. Is there a way I can split these based on the Time/Date and the Process? Ideally, I want to generate a new table like this:

Time/Date | Key | Robot | Process | Host | Status | Environment | Type
01.01.2022 12:30:00 | Key 1 | Robot 1 | Process Claim | Host W | Success | Production | Critical
01.01.2022 12:30:00 | Key 2 | Robot 2 | Process Claim | Host X | Success | Production | Critical
01.01.2022 12:30:00 | Key 3 | Robot 3 | Process Claim | Host Y | Success | Production | Critical
01.01.2022 12:30:00 | Key 4 | Robot 4 | Process Refund | Host Z | Success | Production | Critical
02.02.2022 11:30:00 | Key 5 | Robot 5 | Process Tax | Host V | Failed | Non-Production | Minor
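When those columns are multivalue fields, the usual pattern is mvzip + mvexpand: zip the parallel values together, expand to one row per position, then split them back out. A minimal sketch using the field names from the question:
```
| eval zipped = mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(Key, Robot, "##"), Process, "##"), Host, "##"), Status, "##"), Environment, "##"), Type, "##")
| mvexpand zipped
| eval zipped = split(zipped, "##")
| eval Key=mvindex(zipped,0), Robot=mvindex(zipped,1), Process=mvindex(zipped,2), Host=mvindex(zipped,3), Status=mvindex(zipped,4), Environment=mvindex(zipped,5), Type=mvindex(zipped,6)
| fields - zipped
| table "Time/Date" Key Robot Process Host Status Environment Type
```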
Hi, I have a lot of event data where every instance can be identified by a unique ID. Every instance contains several activities, and some activities occur more than once. For some this is okay, but for others I would like to append e.g. "_2" to the activity name for the second occurrence of that activity. As this should be performed only for the second occurrence within the instance, and only for some activities, I was not sure whether it is possible to transform the data with SPL in the way I need. Thanks for your support!
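This is doable at search time with streamstats. A minimal sketch assuming the fields are named id and activity, that you are on Splunk 8.0+ where eval's in() function is available, and that the activities to rename are listed explicitly (placeholder names below):
```
| sort 0 id _time
| streamstats count as occurrence by id, activity
| eval activity = if(occurrence > 1 AND in(activity, "activity_a", "activity_b"),
                     activity . "_" . occurrence,
                     activity)
```
The sort guarantees occurrences are counted in chronological order per instance; occurrence 2 of a listed activity becomes activity_2, while everything else is left untouched.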
Hi, I have multiple syslog collectors (practically a heavy forwarder that picks up logs from disk). I am struggling to find a way of setting a specific sourcetype for part of these logs that are picked up from disk. /data/syslog/ contains thousands of folders with IP addresses, and I want to set a specific sourcetype for, let's say, 100 of them. I've tried using regex and whitelist, but it seems like two stanzas with the same name won't work:
```
[monitor:///data/syslog/tcp/.../*.log]
sourcetype = rsyslog
host_segment = 4
index = xxx_syslog
blacklist = .*\.gz$

[monitor:///data/syslog/tcp/.../*.log]
sourcetype = vmw-syslog
host_segment = 4
index = xxx_syslog
blacklist = .*\.gz$
whitelist = \/data\/syslog\/tcp\/(10\.21[1289]\.75\.\d+|10\.143\.15\.\d+|10\.21[01]\.70\.\d+|10\.250\.191\.50|10\.30\.221\.19[1-2]|11\.36\.1[128]\.\d+|10\.37\.12\.\d+|10\.45\.[12]\.\d+|10\.6[23]\.12.\d+|10\.63\.10\.20|10\.67\.(0|64)\.\d+|10\.67\.67\.67)\/
```
Any idea how I can set a sourcetype using regex? (I cannot rewrite the sourcetype on the heavy forwarder, because this data should be parsed and get a new sourcetype from a TA app (VMware ESXi logs), and I can't parse data twice.)
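An alternative that sidesteps duplicate monitor stanzas: keep one stanza for everything and override the sourcetype per path with a props.conf source:: stanza on the collector, since source:: stanza headers accept wildcard patterns (* and ...). Whether this assignment happens early enough for the VMware TA's sourcetype-based parsing to pick it up is worth testing on your setup; the pattern below is a placeholder covering just one of your ranges:
```
# props.conf on the heavy forwarder / collector
[source::/data/syslog/tcp/10.211.75.*/...]
sourcetype = vmw-syslog
```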
Hello, I have logs containing two fields, "account" and "shard". By doing "| table account shard" I created a table of two columns, and the table can have repeating values like:

account | shard
100 | 21
100 | 21
100 | 8
101 | 10

I did "| stats dc(shard) by account", which gives me:

account | dc(shard)
100 | 2
101 | 1

I have two such tables (before and after) of "account" vs "dc(shard)", and I want to compare them (get the diff in the distinct number of shards for each account, before and after), but I'm struggling to do this. Please guide me to the result. [I can explain anything that's unclear]
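A minimal sketch, assuming "before" and "after" are two time windows of the same data and that a single boundary timestamp separates them (the boundary, index, and field names below are placeholders):
```
index=my_index
| eval period = if(_time < strptime("2022-10-01", "%Y-%m-%d"), "before", "after")
| stats dc(shard) as dc by account, period
| xyseries account period dc
| fillnull value=0 before after
| eval diff = after - before
```
xyseries pivots the two periods into before/after columns per account, so the final eval gives the change in distinct shard count.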
On 11th October we had 5 events, but we received only 2 email notifications. Below are the 5 events of the alert for yesterday (11th Oct):

1 | 2022-10-11 23:30:04 BST | View Results
2 | 2022-10-11 23:00:05 BST | View Results
3 | 2022-10-11 22:30:04 BST | View Results
4 | 2022-10-11 22:00:02 BST | View Results
5 | 2022-10-11 14:00:02 BST | View Results

But we received email notifications only for the 1st and 5th events; there was no email notification for the 2nd, 3rd, and 4th. Could you please help us with this discrepancy? It had client impact and caused many transaction failures: events were generated for the issues, but the emails were not triggered. Can you help me resolve this issue? Thank you, Veeru
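The pattern (a burst of triggers where only some produced mail) is worth checking against the alert's throttling/suppression settings first. Beyond that, the scheduler's own log records what happened to each firing; a sketch assuming access to _internal, substituting your alert's saved-search name:
```
index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"
| table _time status suppressed alert_actions
```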
Hi, we are receiving logs from UFs and syslog, and now we have a request to forward particular raw Windows events to another syslog server. Does anybody have experience with something like this?
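This is typically done with syslog routing on a heavy forwarder (a universal forwarder can't do regex-based routing). A minimal sketch, where the sourcetype, selection regex, group name, and destination address are all placeholders for your environment:
```
# props.conf
[WinEventLog:Security]
TRANSFORMS-syslogrouting = route_to_syslog

# transforms.conf
[route_to_syslog]
REGEX = EventCode=4625          # placeholder: match only the events to forward
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

# outputs.conf
[syslog:my_syslog_group]
server = 10.0.0.100:514
type = udp
```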
I have the below events/messages in my search results. There are two fields, stack_trace and TYPE, as below. I want to group the events and count them based on a particular text from the stack_trace and TYPE fields, as shown below. Is it possible to group the messages based on the two fields (TYPE, stack_trace)? I am using the query below, but I am stuck on how to group by the two fields.

Event 1
```
{
  TYPE: ABCD
  stack_trace : com.abc.xyz.package.ExceptionName: Missing A.
    at random.package.w(DummyFile1:45)
    at random.package.x(DummyFile2:64)
    at random.package.y(DummyFile3:79)
}
```

Event 2
```
{
  TYPE: XYZ
  stack_trace : com.abc.xyz.package.ExceptionName: Missing B.
    at random.package.w(DummyFile1:45)
    at random.package.x(DummyFile2:64)
    at random.package.y(DummyFile3:79)
}
```

Expected Output
TYPE | Exception | Count
ABCD | Missing A | 3
ABCD | Missing B | 4
XYZ | Missing A | 6
XYZ | Missing B | 1

Query I am using, but incomplete:
```
BASE_SEARCH
| rex field=_raw "Exception: (?<Exception>[^\.\<]+)"
| stats count as Count by "Exception"
```

Actual Output
Exception | Count
Missing A | 3
Missing B | 4
Missing c | 6
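stats accepts multiple by fields, so a minimal sketch reusing the rex from the question (and assuming TYPE is already extracted, e.g. from the JSON):
```
BASE_SEARCH
| rex field=_raw "Exception: (?<Exception>[^\.\<]+)"
| stats count as Count by TYPE, Exception
```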
Hello, I'm working on setting up an alert for when disk space usage goes above 80%. However, I don't know how to change, in the query, the partition that should be checked. The Splunk service is installed on the main partition; however, I want to set the alert for another partition, the one that stores the logs. Here is the search for the alarm:
```
| rest splunk_server_group=dmc_group_* /services/server/status/partitions-space
| eval free = if(isnotnull(available), available, free)
| eval usage = capacity - free
| eval pct_usage = floor(usage / capacity * 100)
| where pct_usage > 30
| stats first(fs_type) as fs_type first(capacity) AS capacity first(usage) AS usage first(pct_usage) AS pct_usage by splunk_server, mount_point
| eval usage = round(usage / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| rename splunk_server AS Instance mount_point as "Mount Point", fs_type as "File System Type", usage as "Usage (GB)", capacity as "Capacity (GB)", pct_usage as "Usage (%)"
```
(The result of the search was shown as a screenshot.)

And the partition that we need to monitor is this one:
```
Filesystem               Size   Used  Avail  Use%  Mounted on
/dev/mapper/vg00-root    1014M  84M   931M   9%    /
/dev/mapper/vg00-usr     4.0G   1.8G  2.3G   45%   /usr
/dev/sda1                1014M  192M  823M   19%   /boot
/dev/mapper/vg00-opt     10G    6.3G  3.8G   63%   /opt
/dev/mapper/vg01-splunk  32G    15G   18G    47%   /var/log/splunk
```
How can I change the query so the search is done on that last partition? Regards!
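A minimal tweak under one caveat: the partitions-space endpoint only reports partitions that Splunk itself uses, so first verify that /var/log/splunk appears among its mount_point values. If it does, filter on it and raise the threshold:
```
| rest splunk_server_group=dmc_group_* /services/server/status/partitions-space
| search mount_point="/var/log/splunk"
| eval free = if(isnotnull(available), available, free)
| eval usage = capacity - free
| eval pct_usage = floor(usage / capacity * 100)
| where pct_usage > 80
```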