All Posts

Where and how do I need to configure inputs.conf for my data inputs? I don't have any log path to give to a monitor stanza in inputs.conf. What exactly should I put in inputs.conf? https://splunkbase.splunk.com/app/4310 This is the app I am trying to install...
Hey Will, @livehybrid, you’re even faster than GPT! We've already upgraded our RAM from 32GB to 64GB.
Hi @uagraw01

It sounds like your Splunk server is running out of RAM. Could you please confirm how much RAM your server has? You could run the following and let us know what is returned:

index=_introspection host=YourHostname component=HostWide earliest=-60m
| dedup data.instance_guid
| table data.mem*

and

| rest /services/server/info splunk_server=local
| table guid host physicalMemoryMB

Also, have you recently added a large number of KV Store objects which might have caused the memory usage to grow quickly? I think the query below should show how big the KV Store is; please let us know what you get back:

| rest /services/server/introspection/kvstore/collectionstats
| mvexpand data
| spath input=data
| rex field=ns "(?<App>.*)\.(?<Collection>.*)"
| eval dbsize=round(size/1024/1024, 2)
| eval indexsize=round(totalIndexSize/1024/1024, 2)
| stats first(count) AS "Number of Objects" first(nindexes) AS Accelerations first(indexsize) AS "Acceleration Size (MB)" first(dbsize) AS "Collection Size (MB)" by App, Collection

It could be that you need to increase RAM to accommodate the demand on the server, or reduce the size of the KV Store. Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
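If you want to check whether the KV Store process itself (mongod) is what is consuming the memory, a hedged follow-up search along these lines may also help. It assumes the standard _introspection resource-usage fields; the host value and the 10-minute span are placeholders to adjust:

index=_introspection host=YourHostname sourcetype=splunk_resource_usage component=PerProcess data.process=mongod earliest=-24h
| timechart span=10m max(data.mem_used) AS mongod_mem_used_MB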
Hello Splunkers!! We are experiencing frequent KV Store crashes, which are causing all reports to stop functioning. The error message observed is: "[ReplBatcher] out of memory." This issue is significantly impacting our operations, as many critical reports rely on the KV Store for data retrieval and processing. Please help me get this fixed. Thanks in advance!!
See my earlier example of the event journey through all layers. If you define a modular input with a destination index of "main", it will be sent as an event destined for the "main" index. The indexers will not "pick up" anything. You must configure the destination index in the appropriate input stanza on the HF. I don't know the particular TA you're talking about, but if it's badly written it might not allow you to do so from the UI, and you might have to edit the conf files directly.
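For the TA-AKAMAI_SIEM stanza quoted elsewhere in this thread, a hedged sketch of such an override might look like the following, placed in $SPLUNK_HOME/etc/apps/TA-AKAMAI_SIEM/local/inputs.conf on the HF. The index name "akamai_siem" is a placeholder; use whatever index you actually created on the indexers:

# local/inputs.conf on the HF - overrides the index for the modular input
[TA-AKAMAI_SIEM]
index = akamai_siem
sourcetype = akamaisiem
interval = 60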
I remember being able to install apps from a zip file via the web GUI on physical Splunk installations. On the other hand, I had an idea. It may be a stupid question, but is it possible to configure a tcpout output in the Splunk web UI? If yes, then there is no need for a separate second app. Then I would only need to add the transforms and props configs and have the users configure the tcpout on their own, and that's it. Just to give you an idea, I want to package something similar to the SC4S heavy forwarder configs described here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/
My source is like below. Whatever the value of "$entityToken$" is (for example, name), I want that same value substituted into the keyword *$entityToken$:init:data:invoke* as well. For example: ("/a/b/api-data-$stageToken$-name") *name:init:data:invoke*

<query>index=* source IN ("/a/b/api-data-$stageToken$-$entityToken$") *$entityToken$:init:data:invoke*
| eval identifier=coalesce('queryParams.identifier', 'event.queryStringParameters.identifier')
| eval msg=coalesce(msg, null)
| where isnotnull(identifier) and isnotnull(msg)
| stats count</query>
Hi, I am adding the query to my dashboard, and when I click on the highlighted "run search" it does not take me to the search page. After the index and source, the keyword search is *word:is:info:name:in:data:invok*. Not sure why it is not taking me to the search page.
Can you elaborate further on how to enable this?
@PickleRick @isoutamo I have gone through the whole thread. I am configuring the Akamai add-on in my environment to get Akamai logs. We have installed this add-on on our HF and are sending that data to the indexers (the CM has indexer discovery configured). I think it comes under modular inputs. I have created an index on the CM and pushed it to the indexers. Now, in the add-on, if I keep the main index (which is what shows in the drop-down for that data input) and forward the logs to the indexers, how will the indexers pick the desired index (the one I created) for these data input (Akamai) logs? Where do I configure this? This data input will not have any log path to configure in inputs.conf, right? A bit confused about this. Can you please clarify? This app came with an inputs.conf in default, and this is how it looks:

[TA-AKAMAI_SIEM]
index=default
sourcetype=akamaisiem
interval=60

This app is not pushed to the indexers; it is only on the HF.
@isoutamo thank you so much. You guys are like pillars to this community. We freshers are so happy to be here. 
@PickleRick So informative, so knowledgeable. So glad to have you in this community. Thank you so much.
When your HF isn't indexing events, it doesn't care about anything you have defined in indexes.conf. The only place it uses that conf file is to give you the list of index names from which you can choose the index where events are stored on your indexers. Personally, I don't configure any index definitions on HFs unless there are modular inputs which want those in the GUI and I cannot configure them via .conf files.
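If you do need the index name to show up in a modular input's GUI dropdown on the HF, a minimal sketch like the following is usually enough. The index name and paths here are placeholders, not taken from this thread; the real storage settings only matter on the indexers:

# Hypothetical minimal indexes.conf on the HF, only so the name appears in input dropdowns
[akamai_siem]
homePath   = $SPLUNK_DB/akamai_siem/db
coldPath   = $SPLUNK_DB/akamai_siem/colddb
thawedPath = $SPLUNK_DB/akamai_siem/thaweddb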
Yes, in your screenshot example it works because your pie chart is large. Reduce the size of the pie chart and you'll notice how fields start to disappear. This is even more apparent when one or more fields have much larger values than all of the others. This happens for me even when "Minimum Size" is set to 0.
Usually there is only one outputs.conf file; it defines where those events are sent. If/when the next step is your indexer cluster, the easiest way is to use the indexer discovery feature (see docs.splunk.com). If the next step is another intermediate forwarder or individual indexer(s), then just add those there. See the outputs.conf specification for more. I propose that you create a separate app which contains just this outputs.conf and the other recommended conf files. If you are packing those apps into zip files, then you must extract them back into the correct directory hierarchy in the …/etc/apps folder.
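As a rough, hedged sketch of what that separate app's outputs.conf could look like when using indexer discovery (the group name, cluster manager URI and key below are placeholders, not taken from this thread):

# Hypothetical default/outputs.conf for a dedicated forwarding app
[indexer_discovery:cluster1]
# newer Splunk versions also accept manager_uri instead of master_uri
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <your_pass4SymmKey>

[tcpout:cluster1_peers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_peers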
Hi @DarthHerm

I saw your message about doing field extractions; in the meantime, have a look to see if this search helps:

index=your_index sourcetype=_json_AuditFramework
| spath input=parameters output=parameters
| spath input=traceInformation output=traceInformation
| eval param_names=mvmap(parameters, mvindex(mvfield(parameters{}.name), mvindex(mvfield(parameters{}.value), 0)))
| eval documentId=mvindex(mvfilter(match(param_names, "@DocumentId")), 1)
| eval trace_info=mvmap(traceInformation, mvindex(mvfield(traceInformation{}.type), mvindex(mvfield(traceInformation{}.class), mvindex(mvfield(traceInformation{}.method), 0))))
| table documentId, trace_info, serverIPAddress, serverHost, clientIPAddress, userName

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @ekmek4

To achieve this type of filtering logic in your dashboard, you need to adjust the search query to handle the different combinations of inputs correctly. The main issue is ensuring that the query logic reflects the conditions you described. Here's how you can modify your query to achieve this:

Check if procname or procname2 is set: if either is set, filter by those.
Check if User is set: if User is set and no process names are set, filter by User.
Combine both conditions: if both process names and User are set, filter by both.

Here's a revised version of your dashboard code with the updated query logic:

<form version="1.1" theme="light">
  <label>Find Network connections(DNS)</label>
  <fieldset submitButton="false">
    <input type="text" token="procname2">
      <label>Enter procname: eg. opera.exe</label>
      <default></default>
    </input>
    <input type="dropdown" token="procname" searchWhenChanged="true">
      <label>Procname</label>
      <fieldForLabel>process_name</fieldForLabel>
      <fieldForValue>process_name</fieldForValue>
      <search>
        <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) | dedup process_name | head 1000 | table process_name</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="dropdown" token="user" searchWhenChanged="true">
      <label>User</label>
      <fieldForLabel>User</fieldForLabel>
      <fieldForValue>User</fieldForValue>
      <search>
        <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) | makemv tokenizer="([^\r\n]+)(\r\n)?" User | mvexpand User | where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE") | dedup User | head 1000 | table User</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>process_name</title>
        <search>
          <query>
            index=sysmon_wec AND (EventCode=22 OR event_id=22)
            | eval proc_filter=if(len("$procname$") > 0 OR len("$procname2$") > 0, 1, 0)
            | eval user_filter=if(len("$user$") > 0, 1, 0)
            | where (proc_filter=1 AND process_name IN ("$procname$", "$procname2$")) OR (user_filter=1 AND User="$user$")
            | makemv tokenizer="([^\r\n]+)(\r\n)?" User
            | mvexpand User
            | where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE")
            | dedup process_name
            | head 100
            | table process_name, User, ComputerName, QueryName, QueryResults
          </query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Key Changes:
proc_filter and user_filter: these are temporary fields used to determine whether the process name or user filters should be applied.
where clause: the logic now checks whether either the process name or user filter should be applied, and applies them accordingly.
Token names: ensure that the token names in your query match those defined in your inputs ($procname$, $procname2$, and $user$).

This setup should allow you to filter based on the conditions you described. If both process names and a user are selected, it will filter by both. If only one is selected, it will filter by that one. Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Ok. Let's try an overly complicated example. Let's assume you have a setup like this (fairly unrealistic, but it shows many options):

(1) UF (reading a file) -> (2) intermediate UF -> (3) HF -> (4) indexer -> (5) another indexer

1. The initial UF has an input defined as follows:

[monitor:///var/log/somelog.txt]
sourcetype=custom:sourcetype
index=testindex1

So the initial UF reads the file /var/log/somelog.txt and sends data from it. It's a UF, so it doesn't yet break the data into single events; it sends the data in chunks (so-called cooked data). Each chunk has the following metadata attached to it:

source = /var/log/somelog.txt
sourcetype = custom:sourcetype
index = testindex1
host = yourhostname

2. Another UF (an intermediate UF) receives the data on its input. Since it's a UF, it doesn't do parsing and it doesn't do any indexing; it just sends the data to its output(s) as it received it (still only cooked), so the metadata stays as it was. The data is sent to a HF.

3. A HF is just a name for a Splunk Enterprise instance which does mostly forwarding and does not do any local indexing (has indexAndForward=false). So the HF receives the data on its input. Since it's a full Splunk Enterprise instance, the data gets parsed, which means it goes through all the stages like line breaking, timestamp recognition, and custom transforms. From this point on your data is processed as "cooked and parsed" (often called "parsed" for short), so each event is processed separately and each event has its own set of metadata. In your case it's at least (we're not digging into how the timestamp is assigned at this point):

source = /var/log/somelog.txt
sourcetype = custom:sourcetype
index = testindex1
host = yourhostname
_time = <some timestamp assigned to an event>
_raw = <raw data of your event>
linecount = <number of lines if your original event was multiline>

(To be fully honest, I'm not 100% sure if linecount is added at this point or at indexing.) Also, for some events the metadata could have been additionally altered by transforms, but let's not touch that at the moment. Since you're not doing any indexing locally, the event is only sent through an output. The HF at this point doesn't care about the index metadata attached to an event. It just sends it away.

4. Now an indexer receives the event. Since the event is received as parsed data (notice that I'm no longer saying that you receive a chunk of data; you receive a particular event), there is no parsing performed at this point. We're dealing with already parsed data, so we don't mangle it anymore (with a possible exception for ingest actions, but that's not what we want to talk about here). The indexer does have local indexing enabled, so it tries to write the event to the index. The metadata attached to the event says:

index = testindex1

So the indexer checks whether it has an index called "testindex1" defined in its configuration. If it does, it indexes the event. If it doesn't, it drops it or sends it to a last chance index if one is defined. But also, as the indexer has an output defined, it sends the same event to the next indexer.

5. The last step in our path - another indexer - receives the event. It's parsed data, so we don't parse it again. And if we have an index called "testindex1" defined, we write the event to that index. If not - again, we either drop the event or send it to the last chance index.

What's important here is that only the indexers at steps 4 and 5 actually care about the contents of the "index" metadata field.
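As a side note on the "altered by transforms" remark in step 3, a hedged sketch of how the index metadata can be overridden during parsing on the HF might look like this. The sourcetype matches the example above, while the transform name and the target index "testindex2" are hypothetical:

# props.conf on the HF (parsing tier)
[custom:sourcetype]
TRANSFORMS-route_index = route_to_testindex2

# transforms.conf on the HF
[route_to_testindex2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = testindex2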
And they do so only while indexing the data. For forwarding, it's always just a label glued onto a chunk of data or an individual event (depending on whether we're talking about cooked or parsed data). A forwarder has no knowledge beyond:

1. the metadata that the preceding component in the path added to the chunk/event
2. the destination output

It doesn't know, nor does it care, whether the output is an indexer or another forwarder (either heavy or universal).
Please disregard. I was overthinking the issue and am looking at setting up some field extractions.
Hi, I'm trying to create a filter for network connections, but I cannot make several tokens work at the same time. I want to create an OR expression. In my head it works like this:

1. The search should work if I put a process_name in the text field.
2. If a process_name is selected from the dropdown along with the text field, search for both processes: (process_name IN ("$token1$","$token2$")).
3. If the first two are not chosen, but a User is selected from the User dropdown, filter by User.
4. If one or two process_name tokens are used and a User is selected, filter by the chosen process_names and then by User.

I have the $procname2$ token for the text field and the $procname$ token for the dropdown of processes. Both process_name tokens work if the dropdown is selected; then the search will use both the dropdown token and the text token. The User token doesn't work at all.

Query for the search:

index=sysmon_wec AND (EventCode=22 OR event_id=22) AND ((process_name IN ("$procname$", "$procname2$") OR User IN ("$user$")))
| makemv tokenizer="([^\r\n]+)(\r\n)?" User
| mvexpand User
| where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE")
| dedup process_name
| head 100
| table process_name,User,ComputerName,QueryName,QueryResults

Here is the full code of my dashboard:

<form version="1.1" theme="light">
  <label>Find Network connections(DNS)</label>
  <fieldset submitButton="false">
    <input type="text" token="procname2">
      <label>Enter procname:eg.opera.exe</label>
      <default></default>
    </input>
    <input type="dropdown" token="procname" searchWhenChanged="true">
      <label>Procname</label>
      <fieldForLabel>process_name</fieldForLabel>
      <fieldForValue>process_name</fieldForValue>
      <search>
        <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) |dedup process_name|head 1000|table process_name</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="dropdown" token="User" searchWhenChanged="true">
      <label>User</label>
      <fieldForLabel>User</fieldForLabel>
      <fieldForValue>User</fieldForValue>
      <search>
        <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) | makemv tokenizer="([^\r\n]+)(\r\n)?" User | mvexpand User |where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE") |dedup User|head 1000|table User</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>process_name</title>
        <search>
          <query>index=sysmon_wec AND (EventCode=22 OR event_id=22) AND ((process_name IN ("$procname$", "$procname2$") OR User IN ("$user$")) )| makemv tokenizer="([^\r\n]+)(\r\n)?" User | mvexpand User |where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE") |dedup process_name|head 100| table process_name,User,ComputerName,QueryName,QueryResults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>