All Posts


Good morning! Is the order of the names always the same, so that VZEROP002 is always the first entry in the list? If so, you could try:

```
index=zn
| spath "items{0}.state"
| spath "items{0}.name"
| search "items{0}.name"=VZEROP002 "items{0}.state"=1
```

Do you need the list entries in one event for comparison within the event, or could you split them into separate events?
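If the order of the entries is not guaranteed, a common pattern is to expand the items array and filter on the fields of each element. This is a sketch assuming the events are JSON with a top-level items array of objects carrying name and state:

```
index=zn
| spath path=items{} output=item
| mvexpand item
| spath input=item
| search name=VZEROP002 state=1
```

mvexpand splits each array element into its own event, so the name/state comparison stays within one element regardless of its position in the array.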
Hello, my apologies, I hope this makes sense; I'm still learning. I have events coming in that look like this: I need to create an alert for when state = 1 for name = VZEROP002, but I can't figure out how to write the query to look only at the state for VZEROP002. The query I'm running is:

```
index=zn | spath "items{1}.state" | search "items{1}.state"=1
```

But the search results still return events where VZEROP002 has a state of 2 and VZEROP001 has a state of 1. I hope that makes sense, and thanks in advance for any help with this. Thanks, Tom
I usually have to write documentation for Splunk dashboards, and it's really time-consuming, so I was thinking maybe I could automate it, so that it produces a simple document for any dashboard. Is that possible?
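One possible approach, sketched below, is to pull each dashboard's Simple XML (for example from the data/ui/views REST endpoint) and render the panel titles and searches as Markdown. This is not a supported Splunk documentation feature; the function name and the minimal XML shape it handles are assumptions:

```python
import xml.etree.ElementTree as ET

def document_dashboard(xml_text):
    """Render a Simple XML dashboard as a minimal Markdown summary.

    Covers only labels, panel titles, and search queries; extend as needed.
    """
    root = ET.fromstring(xml_text)
    lines = ["# " + root.findtext("label", default="(untitled dashboard)")]
    for panel in root.iter("panel"):
        lines.append("## " + panel.findtext("title", default="(untitled panel)"))
        for query in panel.iter("query"):
            if query.text:
                # Quote the SPL so it renders as code in the document
                lines.append("```")
                lines.append(query.text.strip())
                lines.append("```")
    return "\n".join(lines)
```

You could fetch the `eai:data` field for each view from `/servicesNS/-/-/data/ui/views` and feed it to this function, then write the result to one Markdown file per dashboard.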
Hi all, is there any way to freeze a tile in a dashboard so that it stays in place when we scroll down?
Hi, any help or use case for the question below? How do I share a dashboard with the internal team as a URL link, so that it won't ask for a username and password and instead opens directly into the dashboard as read-only (Dashboard Studio)?
Here's a picture of my CSV files and the search result. It only displays the result for the first number. When I search using OR, it does display correctly.
I'm afraid I ran into the same issue described in the original question: I couldn't map data into the data model. The problem was related to the macro (cim_Network_Resolution_indexes) defined in the constraint of the Network Resolution (DNS) data model. I believe the person who asked this question several years ago might also have been a beginner like me :). Since I've solved the problem, I left this comment to help anyone else who might get stuck on this issue. Sorry for any inconvenience caused by bringing up this old question.
Why don't you go to that event, do a Show Source (or copy the raw event) into a new file, then go to Settings -> Add data, upload that file, and experiment with the props in the UI until you can see it working?
If it's OK to put some old files/logs into the frozen state (I assume you either have a cold2frozen script in place or don't need those old events), then you can put your indexer into detention mode (denying all new connections/indexing) and lower the minimum free space to a smaller value. You should also check (e.g. with `du -sh $SPLUNK_DB`) which indexes are biggest and where you could archive some buckets. Based on that, update the maximum retention time in indexes.conf for those indexes. Then start Splunk and wait for it to archive those buckets, and you will get more space. Of course, if you can simply add more space to that filesystem, that's probably the best way to fix the situation and get Splunk up and running.

BUT, as I said, after that you must plan your data storage to use volumes (with separate filesystems) and update the index definitions to use those volumes. This needs some planning and also some service downtime. There are instructions in the Splunk docs and in the community on how to move current indexes to other directories on an indexer. Just follow those instructions, or hire a Splunk partner/PS or other consultant who can do it for you.
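As a sketch, shortening retention for one oversized index might look like this in indexes.conf; the index name and the 30-day figure are placeholders:

```
# indexes.conf on the indexer; [big_index] is a placeholder name
[big_index]
# freeze (archive via cold2frozen, or delete) buckets whose newest
# event is older than 30 days (2592000 seconds)
frozenTimePeriodInSecs = 2592000
```

Splunk freezes buckets only when the newest event in the bucket crosses the threshold, so the space is reclaimed bucket by bucket rather than immediately.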
Hi, one comment on using $SPLUNK_DB in a volume definition. Splunk actually uses $SPLUNK_DB for several different things and stores other data there. This means that when you define a volume with path = $SPLUNK_DB and set a size for it, the limit applies only to that volume. When you have e.g. other indexes and other data in the same filesystem where your $SPLUNK_DB is, I think Splunk cannot count their size toward that volume's total; it only counts the indexes that are defined to use that volume! Basically this means the filesystem could still fill up and stop Splunk, even though you have set a low enough maximum size attribute on the volume. For that reason I suggest that you never use $SPLUNK_DB as a volume path/dir. You should always use a separate filesystem on a separate LV, etc. To be honest, I haven't tested this in my lab to verify that my assumption is correct, but maybe others have done this test? r. Ismo
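As a sketch, a volume on its own dedicated filesystem might be defined like this in indexes.conf (paths, sizes, and the index name are placeholders); note that thawedPath cannot reference a volume:

```
# indexes.conf: volume on a dedicated filesystem, not $SPLUNK_DB
[volume:primary]
path = /data/splunk_indexes
maxVolumeDataSizeMB = 500000

[my_index]
homePath   = volume:primary/my_index/db
coldPath   = volume:primary/my_index/colddb
# thawedPath must be a concrete path; volumes are not allowed here
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

With a dedicated filesystem, everything the volume size limit counts and everything on the disk are the same set of directories, so the accounting concern above goes away.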
https://community.splunk.com/t5/Knowledge-Management/New-Splunk-Metrics-logging-interval/m-p/705328/highlight/true#M10338
Not one of the changes that qualifies as new; it's rather an improvement. It's documented in limits.conf:

```
interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log
  for different subgroups.
* Check metrics.log for the list of configurable "metrics_modules".
* Set "interval" under the desired "metrics_module" stanza.
* Example:
*   If you want 60 seconds metrics logging interval for "thruput:thruput",
*   [thruput:thruput]
*   interval = 60
* Minimum value is 10 seconds.
* Valid value is multiple of 10.
* If value is not exact multiple of 10, it will be adjusted to nearest
  downward multiple.
* Recommended value multiple of 30. Splunk will decide how often to check
  for metrics reporting based on greatest common divisor across different
  values. If "interval" is set 30, 40 for two different components, then
  greatest common divisor for 30, 40 and 60(default) is 10. It's expensive
  for metrics reporting thread to log every 10 sec. If "interval" is set
  30, 900 for two different components, then greatest common divisor for
  30, 90 and 60(default) is 30. It's less expensive for metrics reporting
  thread to log every 30 sec.
* Default : "interval" config value set under [metrics] stanza.
```

https://docs.splunk.com/Documentation/Splunk/latest/Admin/limitsconf
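Putting that together, logging thruput metrics every 30 seconds would look like this in limits.conf (the module name is the one used in the spec's own example):

```
# limits.conf: per-module metrics logging interval
[thruput:thruput]
interval = 30
```

Per the spec above, values must be multiples of 10, with multiples of 30 recommended so the reporting thread's wake-up interval (the GCD across modules) stays cheap.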
In the process of upgrading our Splunk Enterprise. Currently on version 7.1.3 (I know, super old, bear with me). I installed the Splunk Platform Readiness App v2.2.1 and set the permissions to write as the documentation states. When I go to launch the app, I get this error:

Error reading progress for user: <me> on host <hostname>

Digging a bit more into it, I realized that the Splunk Platform Readiness App uses the KV store, and I ran into these errors:

KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
KV Store changed status to failed. KV Store process terminated.
Failed to start KV Store process. See mongod.log and splunkd.log for details.

Splunk is running on Windows Server.

I tried renaming the server.pem file in Splunk/etc/auth and restarting; it created a new server.pem file, but the same issues persist. I attempted to look into mongod.log and splunkd.log, but I'm not sure what I should be looking for. I hadn't yet tried renaming the mongo folder in /var/lib/splunk/kvstore to mongo(old), which I saw worked for some other people with the same issue.

I did some more troubleshooting: I renamed the mongo folder to mongo(old) and Splunk recreated a new one, with the same issues as before. I looked in the mongod.log file and found this:

Detected unclean shutdown - C:\Program Files\Splunk\var\lib\kvstore\mongo\mongod.lock is not empty.
InFile::open(), CreateFileW for C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal\lsn failed with Access is denied.
I also had this problem and managed to get the inputs page to load by changing the locale in the URL, specifically from en-US to en-GB: https://myserver.com:8000/en-US/app -> https://myserver.com:8000/en-GB/app. Not sure if that fixes it on your end, but it's worth a try.
If you simply want to run a search job and get the results all at once, send your HTTP request to https://edp.splunkcloud.com:8089/services/search/v2/jobs/export. This works if your search job doesn't take too long; if it does, the connection can time out. If that doesn't work, then you will have to submit the search job and get back a SID, as you are doing currently. When the search job is complete, send an HTTP request to https://edp.splunkcloud.com:8089/services/search/v2/jobs/{search_id}/results to get the results.
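As a sketch of the one-shot export call, the helper below only builds the URL and form body; the actual request would be an authenticated POST, and the host name here is just the one from the post above:

```python
from urllib.parse import urlencode

def build_export_request(base_url, spl, output_mode="json"):
    """Build the URL and form body for a one-shot export search.

    base_url is a placeholder, e.g. "https://edp.splunkcloud.com:8089".
    Sending the request (with an Authorization header) is left out here.
    """
    url = f"{base_url}/services/search/v2/jobs/export"
    # The search string must start with a command, hence the "search " prefix
    body = urlencode({"search": f"search {spl}", "output_mode": output_mode})
    return url, body
```

The export endpoint streams results back as they are produced, which is also why long-running searches can hit connection timeouts on this path.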
That worked; I'm not sure why that was the case. I will note you weren't correct regarding "Ingest Settings", but for some reason the asset defaulted to the Event label instead of "events", and this connection, once severed, updated my labels and removed Event.
You probably want the Splunk Add-on for Microsoft Azure (https://splunkbase.splunk.com/app/3757). There are set-up instructions at https://github.com/splunk/splunk-add-on-microsoft-azure/wiki (see the Configuration sections on the right).
There should be a tab in Asset Configuration called "Ingest Settings", in the middle between Asset Settings and Approval Settings. In that area you can specify the label to apply to objects created by the app. Since this is missing in your "splunk" asset, something is broken. You might need to delete the asset and re-create it to get it to let go of the label.
We've established that the disk is very full, but not what is using that space. I suspect several indexes are combining to fill up the disk; the du utility (e.g. `du -sh $SPLUNK_DB/*` on the indexer) can verify that.