All Posts


Hi @meng, I never experienced this issue; open a case with Splunk Support, for your case and also for the other Splunk customer. Ciao. Giuseppe
Hi @karn, you can add a column to a lookup using the Lookup Editor app, but remember to also modify the Lookup Definition. With a CSV lookup you don't need to modify the Lookup Definition, but it's required for KV store lookups. Ciao. Giuseppe
Have you tried using the API to add the field? See: KV store endpoint descriptions - Splunk Documentation
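As a rough sketch of that approach (the app name, collection name, field name and credentials below are placeholders, not from this thread), adding a field to an existing KV store collection through the storage/collections/config REST endpoint looks something like this:

```shell
# Add a typed field "new_column" to an existing KV store collection
# via the REST API. All names and credentials are placeholders.
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/my_app/storage/collections/config/my_collection \
  -d field.new_column=string
```

On a search head cluster, configuration changes made through the REST API on a member are typically replicated to the other members, which avoids hand-editing collections.conf on each instance.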
<form version="1.1" theme="light">
  <label>Akamai WAF Dashboard</label>
  <search id="base_search">
    <query>index="waf_app_*" sourcetype=akamai_waf | fields * | search attackData.configId=$configid$ source=$source$</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <description></description>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="configid" searchWhenChanged="true">
      <label>Security Configuration ID</label>
      <choice value="*">All</choice>
      <fieldForLabel>attackData.configId</fieldForLabel>
      <fieldForValue>attackData.configId</fieldForValue>
      <search>
        <query>index="waf_app_*" sourcetype=akamai_waf source=$source$ | stats count by attackData.configId</query>
        <earliest>-5m</earliest>
        <latest>now</latest>
      </search>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="source" searchWhenChanged="true">
      <label>Service Name</label>
      <choice value="*">All</choice>
      <fieldForLabel>source</fieldForLabel>
      <fieldForValue>source</fieldForValue>
      <search>
        <query>index="waf_app_*" sourcetype=akamai_waf attackData.configId=$configid$ | stats count by source</query>
        <earliest>-5m@m</earliest>
        <latest>now</latest>
      </search>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="time" token="time">
      <label>Select Time Range</label>
      <default>
        <earliest>-5m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Top 10 Attack Rule IDs</title>
      <chart>
        <search base="base_search">
          <query>| top limit=10 attackData.rules{}.id | rename attackData.rules{}.id as "Rule ID"</query>
        </search>
        <option name="charting.chart">bar</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <title>Top 10 Attack Rule Tags</title>
      <chart>
        <search base="base_search">
          <query>| stats count by attackData.rules{}.tag | sort - count | head 10</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Rule Messages</title>
      <table>
        <search base="base_search">
          <query>| stats count by attackData.rules{}.message | sort - count | head 10</query>
        </search>
        <option name="dataOverlayMode">heatmap</option>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
    <panel>
      <title>Rule Action by Count</title>
      <chart>
        <search base="base_search">
          <query>| stats count by attackData.rules{}.action | sort - count</query>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.05</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Rule IDs Trend (5 min)</title>
      <chart>
        <search base="base_search">
          <query>| timechart count(attackData.rules{}.id) span=5min</query>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Status Code Trend</title>
      <chart>
        <search base="base_search">
          <query>| stats count by httpMessage.status</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <title>Top 10 IP Addresses</title>
      <chart>
        <search base="base_search">
          <query>| stats count by attackData.clientIP | sort - count | head 10</query>
        </search>
        <option name="charting.chart">bar</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Top 10 HTTP Path Details</title>
      <chart>
        <search base="base_search">
          <query>| stats count by httpMessage.path | sort - count | head 10</query>
        </search>
        <option name="charting.chart">bar</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <title>HTTP Method Count</title>
      <chart>
        <search base="base_search">
          <query>| stats count by httpMessage.method | sort - count</query>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.sliceCollapsingThreshold">0</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.drilldown">all</option>
      </chart>
    </panel>
  </row>
</form>
Hey @Karthikeya, the other data is coming because of an improper drilldown configuration. Can you share the dashboard source code here? Make sure to share it in a code block for better visibility. Thanks, Tejas.
raw data
The dashboard panel looks like this. But when they click on any value (e.g. alert), the data below is what comes back. Ideally they want to see only the alert-related log, but the remaining two are also coming through in the log.
Hello @Karthikeya, can you share a screenshot of how the dashboard looks right now and how it should look ideally? I believe you can use the spath command to separate out each rule from attackData, and then use mvexpand. But to provide more context, I'll need some more information. Thanks, Tejas.
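A sketch of that spath/mvexpand approach (the index and sourcetype are taken from the dashboard shared earlier in the thread; treat them as assumptions for your environment): expand each rule into its own event, then filter on the single message of interest:

```
index="waf_app_*" sourcetype=akamai_waf
| spath path=attackData.rules{} output=rule
| mvexpand rule
| spath input=rule
| search message="Scanning Tools (High Threat) - Shared IPs"
```

After mvexpand, each result carries exactly one rule object, so fields like action, id and message refer only to that rule rather than to all four rules in the original event.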
raw data -

"attackData":{"rules":[
{"data":"SCANTL=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021037","message":"Scanning Tools (High Threat) - Shared IPs","version":""},
{"data":"SCANTL=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021039","message":"Scanning Tools (Low Threat) - Shared IPs","version":""},
{"data":"WEBATCK=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021041","message":"Web Attackers (High Threat) - Shared IPs","version":""},
{"data":"WEBATCK=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021043","message":"Web Attackers (Low Threat) - Shared IPs","version":""}],

Converted to JSON, the event shows the same four rule objects under attackData.rules, all with action "alert" and tag "REPUTATION" (ids REP_6021037, REP_6021039, REP_6021041, REP_6021043).

The issue: whenever we create an alert or dashboard for a single message, such as "Scanning Tools (High Threat) - Shared IPs", we get the correct values, but all of the other rules also come along in the event, which the client is not accepting. I know they are there because that is how the log is structured. Can we do anything to get only the given message or value, not all of them? This is happening for all events.
Hi fsoengn, thanks for the reply. Mine is AppDynamics SaaS. The requirement is: when we are sharing a dashboard and there are any maintenance activities, we need to add a message/banner/scroll at the top so viewers know the application is down for maintenance. I didn't find any such option currently, but I do remember older versions having something similar. Thanks.
Yes, this is possible by using force_local_processing=true:

force_local_processing = <boolean>
* Forces a universal forwarder to process all data tagged with this sourcetype locally before forwarding it to the indexers.
* Data with this sourcetype is processed by the linebreaker, aggregator, and the regex replacement processors in addition to the existing utf8 processor.
* Note that switching this property potentially increases the cpu and memory consumption of the forwarder.
* Applicable only on a universal forwarder.
* Default: false

You should carefully consider whether this option is right for you before deploying it. Read and understand the warning in the spec file (above). By parsing on a UF you are creating a "special snowflake" in your environment where data is parsed somewhere unusual.

props.conf:

[my_sourcetype]
# Use with caution. In most cases it's best to let the parsing occur on a Splunk Enterprise server
force_local_processing = true
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = ...
TIME_FORMAT = ...
TIME_PREFIX = ^
TRANSFORMS = my_sourcetype_dump_extra_events

transforms.conf:

[my_sourcetype_dump_extra_events]
REGEX = discard_events_that_match_this_regex
DEST_KEY = queue
FORMAT = nullQueue

Note that if you want to nullqueue/discard all events EXCEPT those that match a regular expression, the usual documented method won't work (as far as my testing has revealed): https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues

You will instead need a negative-assertion REGEX like so:

[my_sourcetype_dump_extra_events]
REGEX = ^((?!keep_events_that_match_this_regex).)*$
DEST_KEY = queue
FORMAT = nullQueue

In my testing, discarding events on UFs using force_local_processing and a negative assertion caused no measurable increase in CPU, memory, disk IO or network traffic.
I used the below query to check how much data was being sent from the UF to the indexers, and it showed a huge reduction: | mstats sum(spl.mlog.tcpin_connections.kb) as kb where index=_metrics group="tcpin_connections" fwdType="uf" hostname=UF_NAME span=5m | timechart span=5m sum(kb)  
I have a data source of significant size and I want to filter out a large percentage of the data on the UF so it isn't sent to the Splunk indexers. How can this be done?
I have a question about modifying a KV store configuration in a search head cluster environment. I created a KV store collection with the Lookup Editor app from one search head instance. Now I would like to add a new column, so I have to modify collections.conf, right? However, the configuration is not on the SHC deployer but on the individual search head instances. What is the best way to add a new column to the KV store? Thank you.
I used the spath command but it didn't work.
OK, cool, thanks for the extra info. That gives me more confidence that what you were originally seeing is a reflection of data roll-ups. That being said, I think the "more recent" timestamp is "correct" because the value at that timestamp represents a roll-up of the past X amount of time.

For your use case of reliably grabbing the "past minute", I wonder if it would be a good idea to make that minute well-defined by specifying start_time and end_time instead of just "-1m", so you avoid edge cases where a datapoint arrives late for reasons out of your control (network latency, java agent metric export, etc.). So the minute you query might be something like from (now - 2 min) to (now - 1 min).

Once the data arrives at the Observability Cloud ingest endpoint, I don't think you have to worry about any delay with ingest. The data is recorded as it streams in, even when something like a chart visualization appears to have a delay in drawing. I'd be more concerned about any potential latency between the point in time the metric is recorded (e.g. garbage collection in the java agent) and the time it takes the agent to export that datapoint and for it to traverse the network to the ingest endpoint. The timestamp on the datapoint reflects the time it was recorded even if it takes extra time to arrive at ingest (e.g., a datapoint is recorded by the java agent at 16:04:01 but arrives at the ingest endpoint at 16:04:45 due to some temporary network condition).
I use the metadata command to monitor the activity status of member nodes in my cluster, but recently I discovered an anomaly. My SHC member 01 appeared inactive, and the last time metadata showed it sending data was a long time ago. However, when I checked the SHC cluster member status on the back end, it was always in the up state and its most recent reported time was also current. I restarted member 01, but its latest activity still does not show up in the metadata results.
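For reference, a check along these lines (the index, host name and field handling here are assumptions, since the exact search wasn't shared) compares what the metadata command reports for a member against its real activity:

```
| metadata type=hosts index=_internal
| search host=shc_member01
| eval lastSeen=strftime(recentTime, "%F %T")
| table host totalCount lastSeen
```

Because metadata results are maintained per index bucket and can lag or differ across indexers, splunk show shcluster-status (or the Monitoring Console) is generally the more authoritative view of member health than metadata recency.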
I usually use a combination of the .conf VSCode linter that others have suggested for writing, and then upon committing I have AppInspect and the Splunk Packaging Toolkit run against my apps; this keeps them bug-free and gives me confidence that I will pass cloud vetting. I will also drop these, since I wrote them and am biased, but I use them myself for writing SPL in VSCode: Splunk Search Syntax Highlighter Extension and Splunk Search Autocompletion Tool.
Try something like this | rex field=httpMessage.requestHeaders "User-Agent: (?<useragent>.*?)\\r\\n"
Does a Heavy Forwarder support output via HTTPOUT? I've seen conflicting posts saying it's not supported and it is supported. I've configured it and it never attempts to send any traffic.
The issue has been identified. When the other agency pushed SplunkForwarder 9.2.6.0 to my hosts, NT SERVICE\SplunkForwarder was removed as a member of the Performance Monitor Users group. That agency used Ivanti Patch for Endpoint Manager to push the updates. With one of the hosts on 9.2.6.0, I kicked off a repair of SplunkForwarder, and perfmon counters started to come in at the interval set for that host. I next moved to a PowerShell command to add NT SERVICE\SplunkForwarder back as a member of the Performance Monitor Users group. I have asked the tech for a copy of the syntax used to push SplunkForwarder to my hosts so I can go over and validate it. I'm asking support about it too: have there been any known issues with Ivanti pushing SplunkForwarder updates?