All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello @bulbulator. You should definitely check the internal logs; there can be multiple reasons behind this issue.
Hi @Devinz, as with all tokens in Splunk, the field you pass as a token to the Correlation Search Title must be present in the results of the Correlation Search itself. So, in your first example you would use the $description$ field, but you don't have this field after the stats count BY rule_title command. You have to add the fields you want to display in the title to your CS. Ciao. Giuseppe
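As a sketch (field names here are assumed from the thread, not from your actual search), keeping the field in the stats output makes it available as a token:

```
... | stats count BY rule_title description
```

With description in the results, a title such as "Suspicious activity: $description$" can resolve the token.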
Hi @sajjadali1122, your question is very broad. Briefly: first, restrict the time range of your search as much as possible, avoid commands such as join or transaction, and make sure you have performant storage (at least 800 IOPS, better much more!). Then, if you have a large set of data, you can use the acceleration methods described at:

https://docs.splunk.com/Documentation/SplunkCloud/8.1.12/Knowledge/Aboutdatamodels
https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Knowledge/Usesummaryindexing
https://docs.splunk.com/Documentation/SplunkCloud/8.1.12/Report/Acceleratereports
https://www.youtube.com/watch?v=c13phau6zxg
https://docs.splunk.com/Documentation/Splunk/9.3.1/Knowledge/Acceleratetables

and so on, searching "accelerate" on Google. In a few words, you can use a summary index in which you store the results of a scheduled search, so you search over a reduced or already-grouped data set. Or, if you have to search structured data, you could use accelerated Data Models. Ciao. Giuseppe
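As a sketch of the summary-indexing approach (the index and field names here are hypothetical), a scheduled search can pre-aggregate the raw events and write the results to a summary index with collect:

```
index=web sourcetype=access_combined
| stats count BY status host
| collect index=summary_web
```

Reports then search index=summary_web instead of the raw events, which is far cheaper because the heavy aggregation has already been done on a schedule.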
Hi @Kenny_splunk, good for you, see you next time! Let us know if we can help you further, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi. I don't fully understand this SHC configuration:

[raft_statemachine]
disabled = <boolean>
* Set to true to disable the raft statemachine.
* This feature requires search head clustering to be enabled.
* Any consensus replication among search heads uses this feature.
* Default: true

replicate_search_peers = <boolean>
* Add/remove search-server request is applied on all members of a search head cluster, when this value is set to true.
* Requires a healthy search head cluster with a captain.

What changes in a SHC when you set disabled = true or false? The default is true, and replicate_search_peers = true only has an effect if disabled is false. What does setting this to true or false actually do to the cluster?
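For reference, a sketch of how these settings would look if applied (whether you actually want them changed depends on your deployment; the values here are purely illustrative):

```
# server.conf on each SHC member (typically pushed from the deployer)
[raft_statemachine]
disabled = false               # enable the raft statemachine
replicate_search_peers = true  # add/remove search-peer requests apply to all members
```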
Should I edit server.conf on the manager node or on the search heads?
Found the problem, and fixed it.

INFO KeyManagerSearchPeers [601811 TcpChannelThread] - Sending SHC_NODE_HOSTNAME public key to search peer: https://OLDIDX:8089
ERROR SHCMasterPeerHandler [601811 TcpChannelThread] - Could not send public key to peer=https://OLDIDX:8089 for server=SHC_NODE_HOSTNAME (reason='')

Among the SHC nodes there was one to which, probably some time ago, I had copied distsearch.conf manually, without deleting all the previous peers in the UI or restarting with a clean, empty distsearch.conf. So the previous peers remained as artifacts (inside a system KV table?), and splunkd treated them as active even though they were not present or visible in distsearch.conf or in the Distributed Search panel of the UI.

Simple solution, from one SHC node's UI:
1. Delete all peers, one by one (the deletes sync to the other nodes)
2. Insert all peers again, one by one (the inserts sync to the other nodes)

After a clean restart, the WARNING messages about old indexers/peers went away. So it was a real artifact, I presume inside a system KV table, since no .conf on the filesystem contains them! 🤷
I’m experiencing slow performance with my Splunk queries, especially when working with large datasets. What are some best practices or techniques I can use to optimize my searches and improve response times? Are there specific commands or settings I should focus on?
You guys are right, and my apologies. I was a bit excited to finally use the forum to test and see how fast the replies were, but I figured it out. The issue was that in the mac terminal I wrote:

mv Splunk /opt/

and instead of moving "Splunk" into the directory, it completely renamed "Splunk" to "opt" — presumably because /opt did not exist, so mv treated the destination as a new name rather than a target directory. I just changed the name back to Splunk and it was up and running.
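A minimal sketch of this mv behavior (all paths here are made up and created under a scratch directory):

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir Splunk

# Destination does not exist: mv RENAMES the directory.
mv Splunk opt
ls          # now shows "opt", not "Splunk"

# Destination exists as a directory: mv MOVES the source into it.
mkdir dest
mv opt dest/
ls dest     # shows "opt"
```

In short: mv only "moves into" a destination that already exists as a directory; otherwise the operation is a rename.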
Hi @mninansplunk

If you're not sure which index contains your data, start with this search:

| tstats count where source="/var/www/html/PIM/var/log/webservices/*" by sourcetype index host

This is a fast way to find which indexes contain your data and to see the associated hosts and sourcetypes. Once you know the right index, you can run a more detailed search:

index=<your_index> source="/var/www/html/PIM/var/log/webservices/*"
| stats count by source sourcetype host

For the Files & Directories input, was that a typo? Should it be single forward slashes, like this?

/HostName/var/www/html/PIM/var/log/webservices/*

Also make sure the file permissions on your input directory allow access and that your Splunk forwarder can read the path.

Refer to:
https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/GetthetutorialdataintoSplunk
https://docs.splunk.com/Documentation/SplunkCloud/latest/Search/GetstartedwithSearch
https://www.splunk.com/en_us/blog/customers/splunk-clara-fication-search-best-practices.html

If this helps, please upvote.
Hello @linaaabad! @MuS solution should give you a good start. Please don't use "join" instead use stats .. by as  above. Refer the below for documentation. https://lantern.splunk.com/Splunk_Pl... See more...
Hello @linaaabad! @MuS's solution should give you a good start. Please don't use join; instead use stats ... by as above. Refer to the documentation below:

https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Writing_better_queries_in_Splunk_Search_Processing_Language
https://conf.splunk.com/watch/conf-online.html?search=PLA1528B#/
Hi there, without sample events this can be tricky, but since you provided the SPL and you join on UserAccountId, I assume this field is available in both sourcetypes. If that is the case, it would be as simple as:

index=salesforce UserAccountId=* sourcetype="sfdc:user" OR ( sourcetype="sfdc:setupaudittrail" Action=suOrgAdminLogin )
| fields list of fields you want
| stats values(*) AS * by _time UserAccountId

Hope this helps ... cheers, MuS
Please add this at the end of your SPL:

| eval foo=0
| foreach max* [ eval foo='<<FIELD>>' ]
| fields - max*
| rename foo AS max
Hey there! Have you tried executing this use case via no-code automation platforms? I know that Albato has an Integrator that can be used on the free plan. Furthermore, they have a library with several apps already available: https://albato.com/apps
OK, but max is a value I get from the index, not a value I assign myself. My problem is that the value I get from the index is the same for all 3 LPARs, and I only want to display it once.
Hi there, if your max value is static, you could do something like this:

index=_internal sourcetype=*
| timechart span=1h count by sourcetype
| eval max=10000000

This will produce a single max line on the graph. Hope this helps ... cheers, MuS
I need to replace the variables in the rule_title field that is generated when using the `notable` macro. I was able to get this search to work, but it only works when I table the specific variable fields. Is there a way to do this for all titles, regardless of the title and variable fields?
Usually (as always, this is a general rule of thumb; it's impossible to say without detailed knowledge of your environment and data; YMMV and all the standard disclaimers apply) fiddling with search concurrency is not the way to go. You can't get more computing power to run your searches than you have raw performance in your hardware. So even if you raise the concurrency, Splunk will be able to spawn more search processes, but they will starve each other of resources because there's only so much iron underneath. So check what is eating up your resources, disable unneeded searches, optimize the needed ones, teach your users to write effective searches, and so on.
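As a sketch of what "optimize the needed ones" can mean in practice (the index and field names here are made up), the usual pattern is to filter in the base search and drop unneeded fields as early as possible, instead of retrieving everything and filtering afterwards:

```
index=web status=500 | fields _time uri
```

compared to the slower anti-pattern index=web | search status=500 | table _time uri, which pulls all events and all fields off disk before narrowing them down.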
Hey @Meett, this does not solve the issue. I think the culprit is what I've shared in my own comment/reply.