All Posts



I want Splunk to ingest my AV log. I made the following entry in inputs.conf (note: the log file is plain text with no formatting):

[monitor://C:ProgramData\'Endpoint Security'\logs\OnDemandScan_Activity.log]
disable=0
index=winlogs
sourcetype=WinEventLog:AntiVirus
start_from=0
current_only=0
checkpointInterval = 5
renderXml=false

My question is: is the stanza written correctly? When I do a search, I am not seeing anything.
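For comparison, here is how that monitor stanza would more commonly be written; this is a sketch assuming the file actually lives under C:\ProgramData\Endpoint Security\logs\. A Windows path in a [monitor://] stanza needs the backslash after the drive letter and takes no quotes around directories containing spaces; the attribute name is "disabled", not "disable"; and start_from, current_only, and renderXml apply only to [WinEventLog://] inputs, not to file monitors:

```
[monitor://C:\ProgramData\Endpoint Security\logs\OnDemandScan_Activity.log]
disabled = 0
index = winlogs
sourcetype = WinEventLog:AntiVirus
```

After restarting the forwarder, searching index=winlogs over All time helps rule out a timestamp-parsing problem hiding the events.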
Where can I download Splunk Enterprise 9.2.2? I have version 9.2.1, and it has a vulnerability. Here is the description: "The version of Splunk installed on the remote host is prior to the tested version. It is, therefore, affected by a vulnerability as referenced in the SVD-2024-0703 advisory."
This is a Splunk forum. No one here knows what your data source looks like. To ask an answerable data analytics question, follow these golden rules; nay, call them the four commandments:

1. Illustrate your data input (in raw text, anonymized as needed), whether the data are raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output, without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Please illustrate the full message. The look of the fragment suggests your source is actually JSON, something like

{"message":"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257", "foo":"bar"}

Is this correct? Using regex directly on structured data is strongly discouraged, as any such regex is doomed to be fragile. If the JSON is the raw event, Splunk will have already extracted a field called "message". Start from this field instead. This field is itself structured as KV pairs, so use kv (aka extract) instead of rex:

| rename _raw as temp, message as _raw
| kv kvdelim=": " pairdelim=","
| rename _raw as message, temp as _raw
| fields fuel

Your sample data would have given:

fuel | _raw | _time
0 | {"message":"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257", "foo":"bar"} | 2024-07-17 09:06:35

Here is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw = "{\"message\":\"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257\", \"foo\":\"bar\"}"
| spath
``` data emulation above ```
A. The search results are shown below.

B. My goals are as follows:
1. Site1's SH wants to retrieve only the data that site1's indexer has.
2. Site2's SH wants to retrieve only the data that site2's indexer has.
3. Site1's indexer stores RAW data from site1 and site2.
4. Site2's indexer stores only site2's RAW data.

C. Is it possible to configure the following structure?

D. server.conf options:
1. On site1_SH, there is no difference between the behavior when server.conf is set to site=site0 and when it is set to site=site1.
2. On site2_SH, there is no difference in behavior between setting server.conf to site=site0 and site=site2.
Hi All, hope this message finds you well. I have installed Splunk on-prem on a Linux box as the splunk user and have given it proper permissions. The Azure VM shuts down automatically at around 11 pm every day, and there is no auto-start, so for the time being we are starting the VM manually. My problem is that while installing the Splunk instance I ran the "enable boot-start" command and it was successful, but the splunkd service does not start on its own. Can anyone please suggest what can be done to fix it? Thanks in advance.
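A frequent cause is that boot-start was enabled in init.d mode on a systemd distribution, so systemd never starts the service after boot. One possible fix, sketched under the assumptions that Splunk lives in /opt/splunk and runs as the splunk user, is to re-enable boot-start as systemd-managed (run as root):

```
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk disable boot-start
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
systemctl start Splunkd
```

Splunkd is the default unit name; "systemctl list-unit-files | grep -i splunk" shows what was actually registered on your box.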
I have a search that yields:

"message":"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257"

I am trying to extract the value associated with fuel; the value can be any number from 0 to 1000. Using the field extractor I have gotten an unusable rex result:

rex message="^\{"\w+":\d+,"\w+_\w+":"[a-f0-9]+","\w+":"\w+_\w+","\w+_\w+":"\w+","\w+_\w+":"\w+","\w+":\{"\w+":"\w+","\w+":"\w+","\w+":\d+\.\d+,"\w+":\-\d+\.\d+,"\w+":"\w+"\},"\w+_\w+":"\w+","\w+":"\w+::\w+_\w+_\w+:\s+\w+:\s+\d+,\s+\w+:\s+\d+,\s+\w+_\w+:\s+\d+,\s+\w+:\s+\w+,\s+\w+:\s+(?P<fuel_level>\d+)"

When I try to search with this, the next command does not work and my result is: Invalid search command 'a'. Can someone give me a usable rex to get that number into a field titled 'fuel_level'?
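Since only the fuel number is needed, a much simpler rex anchored on the literal field name is usually enough. This is a sketch assuming the search has already extracted a "message" field (if not, point it at _raw instead):

```
| rex field=message "fuel: (?<fuel_level>\d+)"
| table _time fuel_level
```

The field-extractor output fails because the generated pattern contains unescaped double quotes, which terminate the rex string early and leave the remainder to be parsed as a (nonexistent) search command.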
Hi All, any idea what the maximum number of Azure storage accounts (the limit) is that we can use to ingest logs into Splunk? Thanks in advance.
Hope someone can assist. My client needs to be able to read Word and other binary files from a dashboard without importing them into Splunk. He has a file share where they store the documents, and he would like the dashboard to list the contents of the share so he can click a document and view it in its native application. Is there a way to do this with Splunk?
I am exceeding my 5GB license. I determined the problem by doing a 24-hour search using the following:

index="winlogs" host=filesvr source="WinEventLog:Security" EventCode=4663 Accesses="ReadData (or ListDirectory)" Security_ID="NT AUTHORITY\SYSTEM"

The above search returns more than 4.5 million records. My question is: how do I stop Splunk from ingesting Security_ID="NT AUTHORITY\SYSTEM" events of EventCode 4663? I would appreciate any assistance or suggestions.
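One common way to drop such events before they are indexed (so they do not count against the license) is routing them to nullQueue with a transform. This is a sketch: the transform name is made up, the settings must live on the instance that parses the data (indexer or heavy forwarder, not a universal forwarder), and the REGEX runs against the raw event text, so confirm those strings actually appear in _raw:

props.conf

```
[WinEventLog:Security]
TRANSFORMS-drop_system_4663 = drop_system_4663
```

transforms.conf

```
[drop_system_4663]
REGEX = (?ms)4663.*NT AUTHORITY\\SYSTEM
DEST_KEY = queue
FORMAT = nullQueue
```

A restart of the parsing instance is required for the change to take effect.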
Hi, I am facing a similar issue; please let us know when you find a solution. Moreover, none of my Windows clients are shown in the Server Classes, although the apps are being deployed successfully. Any idea?
Hi everyone! This issue is exclusive to Splunk Universal Forwarder v9.2.1. What's happening is that the script is dumping yum update checks against Satellite, thereby filling all the space on the servers. When I checked internal logs, it seems update.sh is installing the older version of these Satellite Linux packages and then throwing a message like:

message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/update.sh" Not using downloaded (satellite package name like rhel..blah..blah) because it is older than what we have

Has anyone faced this particular issue? I cannot understand why update.sh is trying to install the older packages in the first place. Can anyone suggest what can be done to resolve it? Thanks.
I am trying to query our Windows and Linux indexes to verify how many times a user has logged in over a period of time. Currently, I only care about the last 7 days. I've tried to run some queries, but it hasn't been very fruitful. Can I get some assistance with generating a query to determine the number of logins over a period of time, please? Thank you.
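As a starting point, here is a sketch assuming Windows logons are EventCode 4624 in a winlogs index and Linux SSH logins appear in a linux index with sourcetype linux_secure; all index and sourcetype names here are assumptions to adjust for your environment:

```
(index=winlogs EventCode=4624) OR (index=linux sourcetype=linux_secure ("Accepted password" OR "Accepted publickey")) earliest=-7d
| eval login_user=coalesce(user, Account_Name)
| stats count AS logins BY login_user
| sort - logins
```

The coalesce handles the two sources naming the user field differently; check your actual extracted field names first with a quick "| fieldsummary".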
Is it possible to use a lookup file in Notable Event suppression, say, to look up a list of assets/environments that we do or don't want to know about?
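Notable event suppression in Enterprise Security is implemented as an eventtype, and eventtype searches cannot contain pipes, so a lookup cannot be referenced there directly. A common workaround is filtering inside the correlation search itself; in this sketch, asset_suppressions.csv and its host/suppress columns are hypothetical names:

```
... existing correlation search ...
| lookup asset_suppressions.csv host AS orig_host OUTPUT suppress
| where isnull(suppress)
```

Any event whose orig_host matches a row in the lookup gets a suppress value and is discarded before a notable is created.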
Don't use both INDEXED_EXTRACTIONS = JSON and KV_MODE = json in the same stanza, or the fields will be extracted twice. The LINE_BREAKER setting requires a capture group. Try these settings:

[custom_json_sourcetype]
SHOULD_LINEMERGE = false
KV_MODE = json
LINE_BREAKER = }(,\s*){
RF/SF does not apply to SmartStore, so the storage usage would be 5TB. Of the 5TB, the hot buckets would be on the indexers (and replicated) and the rest would be in S2 (the SmartStore remote store).
Please post the SPL as text rather than as screen shots. It looks like the first search would become a subsearch within the second search.
Thanks Abraham, this helped.
Increasing the MAX_TIMESTAMP_LOOKAHEAD attribute in props.conf resolved my issue. Thanks to all the Splunk Trust people. I am accepting this solution.