All Posts



I have a search that yields:

"message":"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257"

I am trying to extract the value associated with fuel; the value can be any number from 0 to 1000. Using the field extractor I have gotten an unusable rex result:

rex message="^\{"\w+":\d+,"\w+_\w+":"[a-f0-9]+","\w+":"\w+_\w+","\w+_\w+":"\w+","\w+_\w+":"\w+","\w+":\{"\w+":"\w+","\w+":"\w+","\w+":\d+\.\d+,"\w+":\-\d+\.\d+,"\w+":"\w+"\},"\w+_\w+":"\w+","\w+":"\w+::\w+_\w+_\w+:\s+\w+:\s+\d+,\s+\w+:\s+\d+,\s+\w+_\w+:\s+\d+,\s+\w+:\s+\w+,\s+\w+:\s+(?P<fuel_level>\d+)"

When I try to search with this, the next command does not work and my result yields: Invalid search command 'a'. Can someone give me a usable rex to get that number into a field titled 'fuel_level'?
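A much shorter pattern anchored on the literal field name should be enough here; the auto-generated rex is brittle, and its unescaped embedded double quotes are what break the SPL parser (hence the Invalid search command 'a' error). In SPL the extraction would be something like | rex field=message "fuel:\s+(?<fuel_level>\d+)". As a sanity check (using the sample message from the question; note Python spells the named group ?P<...> where SPL uses ?<...>), the equivalent pattern can be verified outside Splunk:

```python
import re

# Sample message taken from the question
message = ("journey::cook_client: fan: 0, auger: 0, glow_v: 36, "
           "glow: false, fuel: 0, cavity_temp: 257")

# Anchor on the literal "fuel:" label instead of matching the whole line;
# \d+ covers any integer value in the stated 0-1000 range
pattern = re.compile(r"fuel:\s+(?P<fuel_level>\d+)")

match = pattern.search(message)
fuel_level = match.group("fuel_level")  # → "0"
```

This is a sketch against the one sample event shown, not a guarantee for every message shape your source emits.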
Hi All, any idea what the maximum number of Azure storage accounts is that we can use to ingest logs into Splunk? Thanks in advance.
Hope someone can assist. My client needs to be able to read Word and other binary files from a dashboard without importing them into Splunk. He has a fileshare where they store the documents and would like to read the share, have the list show in the dashboard, and be able to click on a document from the file share and view the file in its native application. Is there a way to do this with Splunk?
I am exceeding my 5GB license. I have determined the problem by doing a 24-hour search using the following:

index="winlogs" host=filesvr source="WinEventLog:Security" EventCode=4663 Accesses="ReadData (or ListDirectory)" Security_ID="NT AUTHORITY\SYSTEM"

The above search returns more than 4.5 million records. My question is: how do I stop Splunk from ingesting Security_ID="NT AUTHORITY\SYSTEM" events with EventCode 4663? Would appreciate any assistance/suggestions.
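One common approach is to filter these events at the universal forwarder with an event-log blacklist so they never count against the license. A hedged sketch, assuming the data arrives via the standard Windows event log input and that the account name appears in the event's Message text (the stanza name and the regex are assumptions — verify them against your actual inputs.conf and a sample 4663 event before deploying):

```
# inputs.conf on the forwarder (illustrative; adjust to your environment)
[WinEventLog://Security]
# Drop 4663 events whose message mentions the SYSTEM account before indexing
blacklist1 = EventCode="4663" Message="NT AUTHORITY\\SYSTEM"
```

Blacklisted events are discarded at collection time, so they cannot be recovered later; scope the regex as narrowly as possible.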
Hi, I am facing a similar issue; please let us know when you find a solution. Moreover, none of my Windows clients are shown in the Server Classes, although the apps are being deployed successfully. Any idea?
Hi everyone! This issue is exclusive to Splunk Universal Forwarder v9.2.1. What's happening is that the script is dumping yum update checks against Satellite, thereby filling all the space on the servers. When I checked internal logs, it seems update.sh is installing older versions of these Satellite Linux packages and then throwing a message like:

message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/update.sh" Not using downloaded (satellite package name like rhel..blah..blah) because it is older than what we have

Has anyone faced this particular issue? I am not able to understand why update.sh is trying to install the older packages in the first place. Can anyone suggest what can be done to resolve it? Thanks.
I am trying to query our Windows and Linux indexes to verify how many times a user has logged in over a period of time. Currently, I only care about the last 7 days. I've tried to run some queries, but it hasn't been very fruitful. Can I get some assistance with generating a query for determining the number of logins over a period of time, please? Thank you.
Is it possible to use a lookup file in the Notable Event suppression, say to look up a list of assets/environments that we do/don't want to know about?
Don't use both INDEXED_EXTRACTIONS = JSON and KV_MODE = json in the same stanza or the fields will be extracted twice. The LINE_BREAKER setting requires a capture group. Try these settings:

[custom_json_sourcetype]
SHOULD_LINEMERGE = false
KV_MODE = json
LINE_BREAKER = }(,\s*){
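Splunk breaks the incoming stream at the text matched by LINE_BREAKER's capture group and discards that text, so }(,\s*){ turns a comma-separated run of JSON objects into one event per object. A rough emulation in Python (the lookaround split below mimics, but is not, Splunk's actual line-breaking engine):

```python
import json
import re

# Two JSON objects separated by a comma, as they might arrive in one stream
raw = '{"a": 1},\n{"b": 2}'

# Emulate breaking at the capture group (,\s*) between } and {:
# the separator text is discarded and each brace stays with its own event
events = re.split(r'(?<=\})(?:,\s*)(?=\{)', raw)

parsed = [json.loads(e) for e in events]  # each event is valid JSON on its own
```

The point of the capture group in the real setting is exactly this: only the separator is consumed, so every resulting event remains a complete, parseable JSON object.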
RF/SF does not apply to SmartStore, so the storage usage would be 5TB. Of the 5TB, the hot buckets would be on the indexers (and replicated) and the rest would be in S2.
Please post the SPL as text rather than as screen shots. It looks like the first search would become a subsearch within the second search.
Thanks Abraham, this helped.
Increasing the MAX_TIMESTAMP_LOOKAHEAD setting in props.conf resolved my issue. Thanks to all the Splunk Trust people. I am accepting this solution.
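For anyone landing here later: MAX_TIMESTAMP_LOOKAHEAD controls how many characters into the event (past TIME_PREFIX, if set) Splunk scans for a timestamp. A minimal props.conf sketch — the sourcetype name and value are placeholders, not the poster's actual configuration:

```
[your_sourcetype]
# Scan up to 40 characters into the event when looking for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 40
```

If the timestamp sits deeper into the event than the current lookahead, Splunk falls back to other timestamping heuristics, which is a frequent cause of wrong event times.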
Hi @smichalski, I'm a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Thanks!
hi,
try

| makeresults
| eval date="Nov 16 10:00:57 2024"
| eval epoch_time=strptime(date, "%b %d %H:%M:%S %Y")
| fields epoch_time

regards,
Abraham
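The format string can be sanity-checked outside Splunk: Python's datetime.strptime uses the same directives (%b abbreviated month, %d day, %H:%M:%S time, %Y year), so a quick check that the string parses as intended looks like this (note that, like SPL's strptime on a zone-less string, this assumes the server's local timezone when converting to epoch):

```python
from datetime import datetime

# Same timestamp and format string as in the SPL answer above
date = "Nov 16 10:00:57 2024"
parsed = datetime.strptime(date, "%b %d %H:%M:%S %Y")
# parsed → datetime(2024, 11, 16, 10, 0, 57); parsed.timestamp() gives epoch
```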
Recently I encountered an issue while rebuilding data on one of our indexers. During this process I needed to execute the following command:

/opt/splunk/bin/splunk _internal call /data/indexes/main/rebuild-metadata-and-manifests

However, upon running it I was prompted for a Splunk username and password. Typically we use the credentials created in the web GUI, but since the web GUI is usually disabled on the indexers, there is no GUI username and password available on them. I tried my search head username and password, followed by the OS username and password, but neither worked. After some research, I discovered that every Splunk instance includes a default admin user created during installation (username: admin, password: changeme), but that didn't work for me either.

Here is the procedure that finally worked for me to reset the password for the admin user:
1. Access the indexer's CLI; the passwd file exists in /opt/splunk/etc/
2. Rename that file to passwd.bak
3. Create a new file named user-seed.conf in /opt/splunk/etc/system/local/ with the following configuration:
[user_info]
USERNAME = admin
PASSWORD = <password of your choice>
4. Restart the Splunk service on that indexer using /opt/splunk/bin/splunk restart

This generates a new passwd file. You can now use the admin user with the password you set in step 3.

After resetting the password, I ran the initial command using the updated admin credentials, and it worked.
Team, I wanted to convert the time below into epoch time. Please help.

Time: Nov 16 10:00:57 2024
Depending on your deployment, it might be worth considering switching to the Microsoft JDBC driver, which is suggested in Splunk's documentation. However, jTDS might still work. By default, jTDS does not use SSL for the connection, which is what causes this error. Append the following to the JDBC URL on the connection configuration page:

;ssl=require;

Feel free to share your connection string, redacted as appropriate.
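For illustration, a complete jTDS URL with the flag appended might look like the line below (the host, port, and database name are placeholders, not values from this thread):

```
jdbc:jtds:sqlserver://dbhost.example.com:1433/mydb;ssl=require;
```

jTDS URL properties are appended as semicolon-separated key=value pairs after the database name.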