All Posts



@PickleRick, okay, thanks for your answer. I checked both "| rest /data/indexes/myindex" and btool as you mentioned, and both have maxTotalDataSizeMB set to 5000 (5 GB). I can't check through the GUI "Settings -> Indexes", but I guess that's not that important.
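For reference, a REST search along these lines shows the configured cap next to the current on-disk size in one table (field names as exposed by the standard indexes endpoint; "myindex" is just the example name from this thread):

```
| rest /services/data/indexes/myindex
| table title maxTotalDataSizeMB currentDBSizeMB
```

Comparing the two columns makes it obvious whether the index has actually grown past its configured limit.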
Hi @santhipriya, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
One more important thing to check:

splunk btool indexes list --debug

This will give you an overview of the settings applied to your indexes, along with where each one is defined. Make sure your settings are defined in the proper places: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles
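To narrow btool's output to the one setting in question, something like this sketch should work (run from $SPLUNK_HOME/bin; "myindex" is a placeholder index name):

```shell
# Show only the effective maxTotalDataSizeMB for one index,
# with --debug printing the file each value comes from
./splunk btool indexes list myindex --debug | grep -i maxTotalDataSizeMB
```

The left-hand column of the --debug output tells you which .conf file won the precedence battle.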
Yes, that's a valid point. That's just one specific case of my general remark about mixing the same space both as a volume-based definition and as a direct directory "pointer". Theoretically, you could use $SPLUNK_DB as your volume location, but:

1. There are some default indexes which write there (like _internal and the other underscore indexes), and you'd have to make sure to relocate/redefine all of them, which might be tricky to keep in sync with new software releases that may introduce new indexes (like _configtracker).

2. $SPLUNK_DB does not contain just indexes but also, for example, kvstore contents (and their backups).
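As a sketch of the cleaner alternative, a dedicated volume outside $SPLUNK_DB avoids both problems. The paths and size limits below are made-up examples, not recommendations:

```
# indexes.conf -- hypothetical paths and limits
[volume:hot_warm]
path = /data/splunk/indexes
maxVolumeDataSizeMB = 500000

[myindex]
homePath = volume:hot_warm/myindex/db
coldPath = volume:hot_warm/myindex/colddb
# thawedPath cannot reference a volume, so it stays a plain path
thawedPath = $SPLUNK_DB/myindex/thaweddb
```

With this layout the volume cap governs all indexes placed on it, while the underscore indexes and kvstore stay untouched under $SPLUNK_DB.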
Well, the OP explicitly said that they verified the macro, the tags and so on. So while the symptoms were similar (couldn't find the datamodel), the reason was probably different. Just pointing this out so that we avoid confusion and people can find the right answer for their problem in the future.
Hi @isoutamo, thanks for your input, but that's not the issue here; I already cleaned my saturated index and restarted the indexer, and it works fine now. And as I said to @richgalloway, in my post I stated that only one of my indexes was taking far more space than it should, and I know which one. The issue is: why did it exceed the maxTotalDataSizeMB set in indexes.conf? Just adding more space might not be the right solution for us, but I'll keep in mind the whole idea of using volumes for better data storage planning, thanks.
Close, but not complete:

index=* [| inputlookup numbers.csv | rename number as search | table search | format ]

Without the final format command, Splunk will use only the first row of the subsearch results as a condition, so it will only look for the first value from the lookup.
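To illustrate what format contributes: if numbers.csv contained the rows 123 and 456, the subsearch above would expand the outer search into a condition roughly like the following (exact parenthesization and quoting may differ):

```
index=* ( ( "123" ) OR ( "456" ) )
```

Because the lookup field was renamed to "search", format emits bare values OR'd together rather than field=value pairs.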
@richgalloway, maybe my post was not clear enough, sorry. I did state that one of my indexes on the partition (and I already know which one: the one I gave in the indexes.conf) is saturated with warm buckets (db_*) and is taking all the available space, even though it's configured as shown in the indexes.conf. Of course multiple indexes are using the disk, but only one went far above maxTotalDataSizeMB and saturated it.
Hi @bowesmana, Events are not showing as expected after selecting "show source".  
Have you tried stopping Splunk, removing the mongod.lock file, and then starting Splunk again?
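As a rough command sequence (assuming a default $SPLUNK_HOME layout; moving the lock file aside rather than deleting it keeps a backup in case anything goes wrong):

```shell
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/mongod.lock /tmp/mongod.lock.bak
$SPLUNK_HOME/bin/splunk start
```

A stale mongod.lock left over from an unclean shutdown can prevent the KV store from starting.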
Hi @Naa_Win, let me understand: you want to send data from the abc servers to the new index and all the others to the old one, is that correct? You could try something like this:

[monitor:///usr/local/apps/logs/*/base_log/*/*/*/*.log]
disabled = 0
sourcetype = base:syslog
index = base
host_segment = 9
blacklist1 = /usr/local/apps/logs/*/base_log/*/*/*xyz*/*.log
blacklist2 = /usr/local/apps/logs/*/base_log/*/*/*abc*/*.log

[monitor:///usr/local/apps/logs/*/base_log/*/*/*xyz*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9

[monitor:///usr/local/apps/logs/*/base_log/*/*/*abc*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9

Ciao. Giuseppe
Hi @Siddharthnegi, as I said, I don't know of any tool that automates writing documentation for a Splunk dashboard. You could create a Python (or other language) script that extracts the contents of a dashboard and copies them into a Word file, but it would have to be created from scratch, maybe using ChatGPT or another AI. Ciao. Giuseppe
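As a very rough sketch of such a script, here is what the extraction step could look like for a Simple XML dashboard, using only the Python standard library. The sample XML and the function name are made up for illustration; in practice you would fetch the dashboard source from Splunk's REST endpoint /servicesNS/-/-/data/ui/views, and producing an actual Word file would need an extra library such as python-docx:

```python
import xml.etree.ElementTree as ET

# Hypothetical Simple XML dashboard source, inlined for the example;
# a real script would download this from the Splunk REST API instead.
DASHBOARD_XML = """
<dashboard>
  <label>Sales Overview</label>
  <row>
    <panel>
      <title>Daily Totals</title>
      <search>
        <query>index=sales | timechart span=1d sum(amount)</query>
      </search>
    </panel>
  </row>
</dashboard>
"""

def describe_dashboard(xml_source):
    """Return (dashboard label, list of (panel title, query)) from Simple XML."""
    root = ET.fromstring(xml_source)
    label = root.findtext("label", default="(untitled)")
    panels = []
    for panel in root.iter("panel"):
        title = panel.findtext("title", default="(untitled panel)")
        query = panel.findtext("./search/query", default="").strip()
        panels.append((title, query))
    return label, panels

label, panels = describe_dashboard(DASHBOARD_XML)
print(label)  # Sales Overview
for title, query in panels:
    print(f"- {title}: {query}")
```

From the (title, query) pairs it is then straightforward to emit whatever documentation format you need.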
Hi @jaibalaraman, do you need a dashboard or can you use a report? It's possible to share a report. To share a dashboard, users must be authenticated, so the real question is: is it possible to implement SSO in Splunk? For more info, see https://docs.splunk.com/Documentation/UBA/5.4.1/Admin/SSO Ciao. Giuseppe
Check out Embed scheduled reports - Splunk Documentation. You must save your dashboard searches as reports and then enable embedding.
Like documentation of the dashboard, for people who want to understand it.
Hi @Siddharthnegi, what kind of document: a user manual, or technical documentation? Anyway, I don't know of any tool or command that generates documentation for a dashboard. Ciao. Giuseppe
Click on your table in Dashboard Studio and choose Data display --> Header row --> Fixed
Hi @Crotyo , I see from your screenshot that you have results, so what's the issue? Ciao. Giuseppe
Hi @Rak, first, check the condition for presence in both main searches. Then, if you have the stats command you should have statistics; it's strange if you don't. Did you copy all of my search, including the stats command? Otherwise, please try this:

(index=testindex OR index=testindex2 source="insertpath" ErrorCodesResponse=PlanInvalid TraceId=*) OR (index=test ("Test SKU"))
| eval type=if(index="test","2","1")
| stats earliest('@t') AS '@t' values('@m') AS '@m' values(RequestPath) AS RequestPath dc(type) AS type_count BY TraceId
| where type_count=2
| eval date=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%Y-%m-%d"), time=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%H:%M")
| fields - '@t'

Ciao. Giuseppe
Good morning! Is the order of the names always the same, so that VZEROP002 is always the first entry in the list? If yes, you could try:

index=zn
| spath "items{0}.state"
| spath "items{0}.name"
| search "items{0}.name"=VZEROP002 "items{0}.state"=1

Do you need the list entries in one event for comparison within the event, or could you split them into separate events?
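If the order is not guaranteed, one common pattern is to split the items array into one result per entry and then filter on the extracted fields. This is sketched from the field names in your question and is untested against your data:

```
index=zn
| spath path=items{} output=item
| mvexpand item
| spath input=item
| search name=VZEROP002 state=1
```

After mvexpand, each list entry is its own result, so the name/state comparison no longer depends on the position in the array.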