All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I transpose with the header field time like this:

| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime _events
| fillnull value=0
| transpose header_field=time 0 column_name=KPI include_empty=true
| sort KPI

Now I need to display only the fields for which _time is earlier than the current time, so I am doing this, and it works:

| where _time < now()

But I also need to display only the fields from within an hour before the current time, so I need something like this, but I don't succeed:

| where _time < now() AND _time > now()-1

Could you help please?
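A minimal sketch of one possible fix, assuming the goal is the last hour: now() returns epoch seconds, so now()-1 only steps back one second, and after transpose the _time field no longer exists, so the filter has to run before the transpose. For example:

| where _time < now() AND _time > relative_time(now(), "-1h")
| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime _events
| fillnull value=0
| transpose header_field=time 0 column_name=KPI include_empty=true
| sort KPI

relative_time(now(), "-1h") is equivalent to now() - 3600 here; either form works.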
Hi Splunkers, for our environments I needed a custom parser for some WAF logs, so I created an add-on to provide this. The add-on was created on a local Splunk instance on my desktop; once completed and tested, it was loaded onto our Splunk Cloud instance, where it has Global permissions.

The point is the following: once installed on Cloud, the add-on correctly parses the logs and performs field extraction as desired, consistently with the results obtained on the local instance; also, the events are correctly tagged with "attack" and "ids" as intended, since we want to see those events in the Intrusion Detection data model. Unfortunately, when I try to search with the Intrusion Detection DM, the events are not present; a simple search like

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Intrusion_Detection by sourcetype

does not show, in its output, the sourcetype created during add-on creation. I followed the usual way I create add-on-to-data-model matching, which is:

1. create an eventtype in eventtypes.conf with the syntax:

[<eventtype name>]
search = <sourcetype> <parameters list>

2. use the above eventtype in tags.conf for tagging, with the syntax:

[eventtype=<eventtype name>]
attack = enabled
ids = enabled

If permissions are OK, what could be the root cause?
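One thing worth ruling out (a diagnostic sketch, not a confirmed root cause): summariesonly=true only returns events that have already been accelerated into the data model summary, so freshly tagged events, or events in indexes the acceleration does not cover, will not show up. Comparing against summariesonly=false tells you whether the tagging itself works:

| tstats summariesonly=false fillnull_value="N/D" count from datamodel=Intrusion_Detection by sourcetype

If the sourcetype appears with summariesonly=false, the eventtype and tags are fine and the gap is on the acceleration side (or in which indexes the CIM searches are allowed to cover); if it does not appear either, verify that the events really carry tag=attack and tag=ids and satisfy the dataset's constraints.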
Lookup table fields that contain < or > symbols are getting escaped to &gt; and &lt;. How can I prevent this from occurring? It only happens when manipulating the field value using the Lookup Editor. For example:

servicecode>="200" AND servicecode<400,1,0), failed=if(servicecode>400,1,0)

gets rewritten as:

servicecode&gt;="200" AND servicecode&lt;400,1,0), failed=if(servicecode&gt;400,1,0)
Hi all, I have two panels; let me call them panel1 and panel2. Panel2 is the detail of a value in panel1, and I didn't use post-process searches; these are two individual panels. The problem is that panel2 searches different events because it is missing the time range to search. How do I fix it?

Panel1, with the drilldown token ipDownload:

<search>
  <query>index=... | fields SrcIP, DownSize | chart sum(DownSize) as Download by SrcIP | sort 10 -Download</query>
  <earliest>$Time.earliest$</earliest>
  <latest>$Time.latest$</latest>
  <sampleRatio>1</sampleRatio>
</search>

Panel2:

<search>
  <query>index=... | search SrcIP="$ipDownload$" | stats sum(DownSize) as Download by DstIP Client AppProtocol | sort 10 -Size | table DstIP, Client, AppProtocol, Download</query>
  <earliest>$Time.earliest$</earliest>
  <latest>$Time.latest$</latest>
  <sampleRatio>1</sampleRatio>
</search>
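A sketch of one common setup, assuming both panels are meant to share one time picker: define a time input whose token is Time, so that $Time.earliest$ and $Time.latest$ actually resolve for panel2 as well (a search whose tokens never resolve will not dispatch with the intended window):

<fieldset>
  <input type="time" token="Time">
    <label>Time range</label>
    <default>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </default>
  </input>
</fieldset>

Separately from the time range, note that panel2 sorts with "| sort 10 -Size" but the stats step names the field Download, so that sort probably wants to be "| sort 10 -Download".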
We have to filter the data that has Result=pass, status=200 and send the other logs to Splunk. We were receiving the logs in Splunk before adding props.conf and transforms.conf. We have the following configuration in props.conf and transforms.conf.

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/default/transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = result\=200
DEST_KEY = queue
FORMAT = indexQueue

[cloudnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[cloudparsing]
REGEX = result\=pass
DEST_KEY = queue
FORMAT = indexQueue

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/default/props.conf

[alibaba:cloudfirewall]
TRANSFORMS-set = cloudnull,cloudparsing

[alibaba:waf]
TRANSFORMS-set = setnull,setparsing

But we are not receiving any logs in Splunk for this, although there are logs in Alibaba Cloud. Below is the inputs.conf file.

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/local/inputs.conf

[sls_datainput://Alibaba_Cloud_Firewall]
event_retry_times = 0
event_source = alibaba:cloudfirewall
event_sourcetype = alibaba:cloudfirewall
hec_timeout = 120
index = *****
interval = 300
protocol = private
sls_accesskey = *****
sls_cg = ******
sls_cursor_start_time = end
sls_data_fetch_interval = 1
sls_endpoint = *******
sls_heartbeat_interval = 60
sls_logstore = *****
sls_max_fetch_log_group_size = 1000
sls_project = *******
unfolded_fields = {"actiontrail_audit_event": ["event"], "actiontrail_event": ["event"] }

[sls_datainput://Alibaba_waf]
event_retry_times = 0
event_source = alibaba:waf
event_sourcetype = alibaba:waf
hec_timeout = 120
index = *****
interval = 300
protocol = private
sls_accesskey = ******
sls_cg = *******
sls_cursor_start_time = end
sls_data_fetch_interval = 1
sls_endpoint = ****
sls_heartbeat_interval = 60
sls_logstore = *****
sls_max_fetch_log_group_size = 1000
sls_project = ****
unfolded_fields = {"actiontrail_audit_event": ["event"], "actiontrail_event": ["event"] }
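A couple of things worth checking, with a sketch (this is a guess at the failure mode, not a confirmed diagnosis). First, the null-then-index pair keeps only events whose raw text matches the second REGEX and drops everything else, and the pattern is case-sensitive, so if the raw events say "Result=pass" with a capital R (as in the requirement above), result\=pass matches nothing and every event lands in nullQueue. A case-insensitive variant avoids that:

[cloudparsing]
REGEX = (?i)result=pass
DEST_KEY = queue
FORMAT = indexQueue

Second, as written this logic keeps the pass/200 events and drops the rest, which is the opposite of "filter the data that has Result=pass, status=200 and send the other logs"; inverting it would mean routing only the matching events to nullQueue and leaving everything else on the default indexQueue. Finally, index-time transforms only run where parsing happens (the heavy forwarder running the input, or the indexers), so the props and transforms must be deployed there.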
Hi All, I have two different queries and I want to combine their results. Each query returns a single-value output, and I want these two values in the same search result. Thanks for any help.

index="abc" (TYPE="Run bot finished" OR TYPE="Run bot Deployed")
| search STATUS=Successful TYPE="Run bot finished"
| stats count
| rename count as Success_Count

index="abc" RPAEnvironment="prd" ProcessName="*" LogType="*" TaskName="*Main*" (LogLevel=ERROR OR LogLevel=FATAL)
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval LogDescription = trim(replace(LogDescription, "'", ""))
| eval LogMessage = trim(replace(LogMessage, "'", ""))
| eval TaskName = trim(replace(TaskName, "'", ""))
| eval host=substr(host,12,4)
| eval Account=if(User!="", User, LoginUser)
| table Time, LogNo, host, Account, LogType, LogMessage, TaskName, ProcessName
| rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI"
| sort - Time
| stats count
| rename count as Failure_Count
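A sketch of one way to do it, assuming the two searches stay as above: run the first, then attach the second single value with appendcols so both counts land on the same row. Since only the final count matters, the formatting steps in the second query can be dropped:

index="abc" TYPE="Run bot finished" STATUS=Successful
| stats count as Success_Count
| appendcols
    [ search index="abc" RPAEnvironment="prd" TaskName="*Main*" (LogLevel=ERROR OR LogLevel=FATAL)
      | stats count as Failure_Count ]

The result is a single row with the columns Success_Count and Failure_Count.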
Hi, I need to make a timechart from a single-value panel result. In this single panel, I stats the events like this:

| stats count as PbPerf by s
| search PbPerf>10
| stats dc(s)

The result of this search is 14 events. Now I need to timechart these 14 events, so I am doing this:

| bin _time span=1d
| stats count as PbPerf by s _time
| search PbPerf>10
| timechart count span=1h

The first problem I have is that to retrieve the 14 events before doing the timechart, I have to use span=1d. But then, of course, all 14 events are grouped under the same _time, even if I use span=1h in the timechart. So how can I display a timechart that shows a _time value for each of my 14 events? Thanks
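A sketch of one approach, assuming the goal is to plot when each problematic s occurs: compute the per-s count over the whole search window with eventstats instead of stats, so every raw event keeps its own _time, then let timechart bucket by hour:

| eventstats count as PbPerf by s
| where PbPerf>10
| timechart span=1h dc(s)

eventstats annotates the events with the count rather than collapsing them, which is what lets the later timechart still see the individual timestamps.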
Hi, I need to compare the results of two single-value panels between two different dates. The first single panel concerns the current day over the last 15 minutes and consists of a basic count:

| stats dc(s)

In the second single panel, I need to do the same count, but for one week earlier, also over the 15 minutes preceding the same time of day. Is it possible to do such a thing? Thanks
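A sketch, assuming the base search is otherwise identical in both panels: relative time modifiers can anchor the second panel's window one week back, covering the same 15 minutes seven days earlier:

index=... earliest=-15m latest=now
| stats dc(s)

index=... earliest=-7d-15m latest=-7d
| stats dc(s)

A single panel could also compute both values in one search (for example with append), but the two-panel form above is the most direct.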
Hi Team, I have a dashboard like below. In my dashboard I have six panels, each with the same two columns, named "index" and "Current status". The data the two columns return differs per panel, but the column names are the same, so "index" and "Current status" appear six times across the six panels of the single dashboard. I need to know how to make the "index" and "Current status" columns appear only one time. Please suggest.

[dashboard screenshot of sample results omitted]
Hello, I would like to copy my app dashboards, say from the app A/local/data/ui/views folder, to the corresponding backup app, say A_backup/../views, several times a day, adding a timestamp to the dashboard name. The goal is to give developers the possibility to go back to their code from, say, 3 hours earlier. What do I need to take into consideration for that? I mean, I would like to avoid restarting my Splunk in between to make the changes visible, of course. The developers should be able to access A_backup and see their versioned dashboards under the corresponding names. I know there are perhaps better ways (a GitHub app) for that, but I would like to keep it as simple as this. I made a test with a copy-paste of one .xml file within the same app, but it is not visible in the UI, so I guess I am missing some parts here. Can anyone help with the above?

Kind Regards,
Kamil
Hey everyone. I need some help breaking a JSON event that is ingested in the current nested JSON format:

[
  {
    "title": "Bad Stuff",
    "count": 2,
    "matches": [
      { "EventID": 13, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" },
      { "EventID": 16, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }
    ]
  },
  {
    "title": "Next Bad Stuff",
    "count": 2,
    "matches": [
      { "EventID": 14, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" },
      { "EventID": 17, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }
    ]
  }
]

I would like to break it into separate events like this:

{ "title": "Bad Stuff", "count": 2, "EventID": 13, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" }
{ "title": "Bad Stuff", "count": 2, "EventID": 16, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }
{ "title": "Next Bad Stuff", "count": 2, "EventID": 14, "EventRecordID": 19700, "User": "NT AUTHORITY\\SYSTEM" }
{ "title": "Next Bad Stuff", "count": 2, "EventID": 17, "EventRecordID": 21700, "User": "NT AUTHORITY\\ADMIN" }

What would I need in my props.conf and transforms.conf to achieve this?

Thanks in advance, Splunk community! Cheers.
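A caveat and a sketch: props/transforms can split the array into multiple events at index time (for example, a LINE_BREAKER on the boundary between the top-level objects), but they cannot copy the parent title and count into each match, because a transform only rewrites the event it runs on. If a search-time shape is acceptable, something along these lines (field names taken from the sample above) yields one row per match:

| spath path={} output=rule
| mvexpand rule
| eval title=spath(rule, "title"), count=spath(rule, "count"), match=spath(rule, "matches{}")
| mvexpand match
| eval EventID=spath(match, "EventID"), EventRecordID=spath(match, "EventRecordID"), User=spath(match, "User")
| table title count EventID EventRecordID User

If the events truly must be restructured before indexing, duplicating the parent fields into each child is usually done outside Splunk (or with ingest-time eval or scripted preprocessing) rather than with props/transforms alone.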
Hi to all, my Splunk architecture consists of: 4 search heads, 2 indexers, and 1 deployment server (which also hosts the cluster master and the deployer).

I need to install a heavy forwarder, but I don't have any available machines; on which existing machine is it better to install a second Splunk Enterprise instance (the heavy forwarder)?

Thanks to all.
Hi, I want to know the list of event types and attributes that can be used in ADQL queries. Thank you, Hemanth Kumar.
Hi, I am creating a dashboard panel via the classic XML method. My query is quite straightforward, as shown below. The issue is that the panel displays all the results despite my XML having count set to 5. Any idea why it does so, and how to make Splunk limit the results per the count?

<title>Top 5 Countries Last 24 hours</title>
<table>
  <search>
    <query>index=aws sourcetype="aws:waf" "httpRequest.country"!="-" | stats count by httpRequest.country</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <option name="count">5</option>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>

[screenshot of panel results omitted]
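For what it's worth, the table's count option controls how many rows are shown per page (pagination), not how many results the search returns. A sketch of limiting to the top 5 in the search itself:

index=aws sourcetype="aws:waf" "httpRequest.country"!="-"
| stats count by httpRequest.country
| sort - count
| head 5

The sort step also guarantees these are the top 5 countries by count rather than the first 5 in alphabetical order.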
Hello, I am receiving the following warning in my app:

[warning screenshot omitted]

I have updated my app using the last option in this article: https://www.splunk.com/en_us/blog/tips-and-tricks/html-dashboards-deprecation.html

"The final option is to move your HTML files from /data/ui/html to /appserver/static/template and add a Single Page Application (SPA) view that specifies <view template="app-name:/static/template/path" type="html">."

But I still get the warning. There may be something wrong in my code, but I don't know what. Please, someone, help!
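For reference, a minimal sketch of what the quoted instruction describes, assuming an app named my_app with the HTML moved to appserver/static/template/dashboard.html (both names illustrative): the view file under data/ui/views would contain only

<view template="my_app:/static/template/dashboard.html" type="html" />

If the warning persists, it is worth checking whether any files are still left under data/ui/html, since a deprecation check that keys on that directory would keep firing until it is empty.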
Hi, I have for each event an open_time and an update_time, and I want to calculate the age of the event, like:

open_time       update_time     age
2022-03-26      2022-04-26      1m
2022-04-22      2022-04-26      4d

Any idea? Thanks
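A sketch, assuming both fields are strings in YYYY-MM-DD format: parse them with strptime, take the difference in seconds, and format it. The case() rendering below is just one way to approximate the 1m/4d labels shown above:

| eval open_epoch=strptime(open_time, "%Y-%m-%d"), update_epoch=strptime(update_time, "%Y-%m-%d")
| eval age_days=round((update_epoch - open_epoch) / 86400)
| eval age=case(age_days>=30, round(age_days/30)."m", true(), age_days."d")
| table open_time update_time age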
I have to prepare reporting dashboards in Splunk, for which I have used this query until now:

field1=GTIN_RECEIVED field2=NREC field3=*1234* field4=SNS NOT
    [search field1=MESSAGE_INVALID OR field1=GTIN_INVALID field2=NREC OR field2=PRODUCER field3=*1234* field4=SNS
     | dedup field5
     | fields field5 ]
| dedup field5
| table field5
| rename field5 as gtin

The data size is huge now and the query takes too long to run, which makes it very difficult for me to generate the dashboard. Can someone please help simplify this query so that it takes minimal time?
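A sketch of a subsearch-free rewrite, assuming field5 is the join key as in the original and the boolean grouping below matches the intent: fetch both populations in one pass, flag the "invalid" events, and keep only field5 values that are never flagged. This also avoids the subsearch result limit, which can silently truncate the NOT list on large data and give wrong results:

((field1=GTIN_RECEIVED AND field2=NREC) OR ((field1=MESSAGE_INVALID OR field1=GTIN_INVALID) AND (field2=NREC OR field2=PRODUCER))) field3=*1234* field4=SNS
| eval bad=if(field1=="MESSAGE_INVALID" OR field1=="GTIN_INVALID", 1, 0)
| stats max(bad) as has_bad by field5
| where has_bad=0
| table field5
| rename field5 as gtin

stats runs once over all matching events, so each distinct field5 is evaluated exactly once instead of being deduped on both sides of a subsearch.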
Hello Splunk Community, I am facing this issue and was hoping someone could help me: in a Splunk data model, among the auto-extracted fields, there are some events whose fields are not being extracted. The majority of the events have their fields extracted, but there are some 10-15 events whose fields are not extracted properly. Any suggestions or ideas as to what is causing this discrepancy? Thanks!
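One way to narrow it down (a sketch; the data model, dataset, and field names are placeholders): pull the dataset's events, isolate the ones missing a field, and compare their _raw against the working ones, since auto-extraction depends on the event text itself (delimiters, key=value formatting, truncation) being well formed:

| datamodel Your_DataModel Your_Dataset search
| where isnull('Your_Dataset.some_field')
| table _time sourcetype _raw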
I have a Threat Intelligence search that I would like to filter based on results. The scenario: if the threat activity is matched in the Network_Traffic data model, then, based on action (allowed, dropped, or blocked), the search should only send me the allowed traffic and filter out the dropped or blocked traffic.

| from datamodel:"Threat_Intelligence"."Threat_Activity"
| search NOT
    [| inputlookup local_intel_whitelist.csv
     | fields threat_collection_key, dest
     | table threat_collection_key, dest
     | format "(" "(" "OR" ")" "OR" ")" ]
| append
    [| map search="search index=netfilter $threat_match_value$"
     | eval threat_action_value="found"
     | eval action="*" ]                <-- this is the part I added
| dedup threat_match_field,threat_match_value
| `get_event_id`
| table _raw, event_id, source, src, dest, threat*, weight, orig_sourcetype, action
| rename weight as record_weight
| `per_panel_filter("ppf_threat_activity","threat_match_field,threat_match_value")`
| `get_threat_attribution(threat_key)`
| rename source_* as threat_source_*, description as threat_description
| eval risk_score=case(isnum(record_weight), record_weight, isnum(weight), weight, 1=1, null())
| fields - *time
| eval risk_object_type=case(threat_match_field=="query" OR threat_match_field=="src" OR threat_match_field=="dest", "system", threat_match_field=="src_user" OR threat_match_field=="user", "user", 1=1, "other")
| eval risk_object=threat_match_value
| dedup dest
| eval urgency=if(threat_category=="_MISP", "medium", "high")
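A note on the added part, plus a sketch (assuming action gets populated from the matched Network_Traffic events): eval action="*" stores a literal asterisk string in the field, it does not act as a wildcard match, so it won't restrict anything to allowed traffic. An explicit filter placed once action is populated does the filtering:

...
| search action="allowed"

inserted before the dedup/table steps, so dropped and blocked matches fall out of the results early.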
I ran this search in Splunk Cloud web and got the results below. Can anyone help with how to resolve these errors?

index=_internal source=*/splunkforwarder/var/log/splunk/splunkd.log OR source=*SplunkUniversalForwarder\\var\\log\\splunk\\splunkd.log log_level=ERROR
| transaction host component

1)

04-26-2022 13:27:26.944 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)
04-26-2022 13:27:26.944 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::init: Failed to bind to DC, dc_bind_time=1031 msec
04-26-2022 13:27:27.959 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)
04-26-2022 13:27:29.090 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)
04-26-2022 13:27:29.715 -0700 ERROR ExecProcessor [4000 ExecProcessor] - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - EvtDC::connectToDC: DsBind failed: (1722)

2)

04-26-2022 09:38:13.402 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:38:43.312 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:39:13.173 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:39:43.118 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed
04-26-2022 09:40:12.952 -0700 ERROR TcpOutputFd [5228 TcpOutEloop] - Connection to host=1*******0.146:9997 failed

3)

04-26-2022 08:27:54.691 -0700 ERROR PipelineComponent [6004 CallbackRunnerThread] - Monotonic time source didn't increase; is it stuck?