All Posts

As @gcusello says, stats will count the occurrences easily, but only if they are in a multi-value field, so it depends on how your data is actually represented. The following run-anywhere example uses the lines you gave as a starting point, but your actual data may be different to this.

| makeresults
| eval _raw="Satisfied Conditions: XYZ, ABC, 123, abc
Satisfied Conditions: XYZ, bcd, 123, abc
Satisfied Conditions: bcd, ABC, 123, abc
Satisfied Conditions: XYZ, ABC, 456, abc"
| multikv noheader=t
| rename _raw as Condition
| table Condition
``` The lines above set up some dummy data - possibly similar to your post? ```
``` First split out the conditions ```
| eval Condition=mvindex(split(Condition,": "),1)
``` Second split the conditions into a multi-value field ```
| eval Condition=split(Condition,", ")
``` Now stats can count the occurrences of the conditions ```
| stats count by Condition
I want the output in the below format.

Input as below:

host    sql instance    db name
abc     sql1            db1
abc     sql1            db2
abc     sql2            db123
abc     sql2            db1234
xyz     xyzsql1         db11
xyz     xyzsql2         db321
xyz     xyzsql2         db123
xyz     xyzsql2         db1234
www     wwwsql1         db123
www     wwwsql1         db1234

Output as below:

host    sql instance    db name
abc     sql1            db1
                        db2
abc     sql2            db123
                        db1234
xyz     xyzsql1         db11
xyz     xyzsql2         db321
                        db123
                        db1234
www     wwwsql1         db123
                        db1234
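One way to produce that layout (a sketch; the field names host, sql_instance, and db_name and the index are assumptions about how the data is extracted) is to collapse the db names into a multi-value field per host/instance pair:

index=example
| stats values(db_name) AS db_name BY host, sql_instance

stats values() shows each host and sql instance combination once, with the matching db names stacked in a single multi-value cell, much like the desired output.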
Hello, I just tried to remove the entire timestamp before the JSON data and it works. But how can I remove the timestamp for all queries with different timestamps? Regards,
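If the goal is to drop everything before the JSON payload at search time, whatever the timestamp value is, a sed-style rex might work (a sketch; it assumes each event's JSON starts at the first "{"):

| rex mode=sed "s/^[^{]*//"
| spath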
We will try to check it. Thanks for the advice.
The letter Z at the end of 2023-09-30T04:59:59.000Z signifies Zulu time. (Zulu equals UTC for practical purposes.)  All you need to do is strptime(due_at, "%Y-%m-%dT%H:%M:%S.%3N%Z").
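A run-anywhere sketch of that conversion (the field name due_at comes from the post; the final strftime is only there to show that the value parsed):

| makeresults
| eval due_at="2023-09-30T04:59:59.000Z"
| eval due_epoch=strptime(due_at, "%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval readable=strftime(due_epoch, "%F %T %Z")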
Hello, I just tried the rex but it is still not in JSON format. Do I need to export the query in JSON or something like that? Regards,
Sorry, missed the main rule: LINE_BREAKER = ([\r\n]+)<\d+>\d{4}- (note that LINE_BREAKER must contain a capturing group marking the event boundary).
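In context, a minimal props.conf stanza might look like this (the sourcetype name and the SHOULD_LINEMERGE line are assumptions; only the LINE_BREAKER rule comes from this thread):

# props.conf on the parsing tier
[my_syslog_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<\d+>\d{4}-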
Hi @vijreddy30, are you speaking of HA using a single-site cluster (even if the machines are located in more sites) or DR using a multisite cluster? They are really different! In the first case you have to configure your Replication Factor and Search Factor so that one site has a full searchable copy of your data. Anyway, in both cases you can see https://docs.splunk.com/Documentation/Splunk/9.1.1/Indexer/Multisitedeploymentoverview for how to configure the Indexer Cluster, and https://docs.splunk.com/Documentation/Splunk/9.1.1/DistSearch/AboutSHC for how to configure a Search Head Cluster. The management servers (Cluster Master, Deployer, Deployment Server, License Master and Monitoring Console) are unique in the Splunk architecture, and the DR site will also work without them (for a short period): they aren't single points of failure. If needed, you could consider having a turned-off copy of them in the secondary site. You can find a description of these architectures at https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf  Ciao. Giuseppe
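For the multisite case, the site replication and search factors are set on the Cluster Master in server.conf; a sketch (the site names and copy counts are assumptions, see the Multisite docs linked above):

# server.conf on the Cluster Master
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2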
(Data normalization is just to place data in diagonal tables for faster retrieval.) Anyway, the reason why data characteristics matter is because the cost of searching depends on them.  This is true in all relational data, whether it is SQL or SPL.

"All from the same set of events" is too broad.  It can describe a set of diagonal events like

field1  field2  x    y    z    a    b    c
f1v1            xv1  yv1  zv1
        f2v1                   av1  bv1  cv1
f1v2            xv2  yv2  zv2
        f2v2                   av2  bv2  cv2

But it fits just as well a set of fully populated events like

field1  field2  x    y    z    a    b    c
f1v1    f2v1    xv1  yv1  zv1  av1  bv1  cv1
f1v2    f2v2    xv2  yv2  zv2  av2  bv2  cv2

For fully populated data, why not use this?

index=example
| stats avg(field1) perc95(field2) by x, y, z, a, b, c

For diagonal (sparse) data, this would speed things up:

index=example field1=* x=* y=* z=*
| stats avg(field1) by x, y, z
| append
    [ search index=example field2=* a=* b=* c=*
    | stats perc95(field2) by a, b, c ]

I suspect that you have a specific use case, that you know your data sit somewhere between the two extremes, and that you have some specific results in mind.  You are correct to say that this is data engineering, because in Splunk you are really designing your schema on the fly.  This is where Splunk shows its tremendous power.

In traditional data engineering, you optimize your schema based on the queries (analytics) you anticipate and on data characteristics.  Same here.  You need to articulate data characteristics in order to optimize SPL.  There is no single "optimal" pattern.  Not in SQL, not in SPL.

As you already realized, there is a good reason why Optimizing Searches emphasizes limiting the number of events retrieved from the index.  If you append multiple subsearches that retrieve the same raw events from the index, as some of your mock code does, it naturally multiplies the index-search cost.  When events are numerous, index-search cost can heavily affect total cost.  So, using filters in the first pipe is important.  But which filters can be applied relies heavily on data characteristics and the kind of analytics you perform.  The pattern you observed is very much a function of your actual data and the stats you perform.
Hi team,

In my project, Zone 1 has a Deployment Server, an HF, and a (SH+Indexer); Zone 2 also has a Deployment Server, an HF, and a (SH+Indexer), and there is no Cluster Master.

My requirement is to set up a High Availability configuration between Zone 1 and Zone 2. My plan is to go to the Zone 2 SH+Indexer server -> Settings -> Indexer clustering, and there set the master node to the Deployment Server of Zone 1, because I don't have a Cluster Master in my project. Please guide me on my requirement.

Vijreddy
Hi @Utkc137, you have three solutions:

1. use an rsyslog server to receive the UDP traffic and write the logs to a file that's read by a Forwarder; in this case it also works when Splunk is down,
2. use the SC4S app (https://splunkbase.splunk.com/app/4740),
3. add the persistentQueueSize parameter (https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Inputsconf) to your inputs.conf, using a large value.

I don't like SC4S, so I suggest using both the 1st and 3rd solutions. Ciao. Giuseppe
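A minimal inputs.conf sketch for the 3rd option (the port, sourcetype, and queue size are assumptions; persistentQueueSize is the parameter from the docs link above):

# inputs.conf on the instance receiving the UDP feed
[udp://514]
sourcetype = syslog
persistentQueueSize = 10GB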
@tej57 is this correct?
Hi @adamsmith47, to send some logs to two indexer groups you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input Ciao. Giuseppe
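For reference, the pattern described on that docs page looks roughly like this (the group names, server addresses, and monitor path are assumptions):

# outputs.conf on the forwarder
[tcpout:group_a]
server = idxA1.example.com:9997

[tcpout:group_b]
server = idxB1.example.com:9997

# inputs.conf on the forwarder: route this input only to group_b
[monitor:///var/log/app.log]
_TCP_ROUTING = group_b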
Hi @LearningGuy, have you tried the delete command (obviously after enabling the can_delete role)? Use this command with great care, and at the end disable the can_delete role on your account! You should create a search to identify the events to delete and then add the delete command at the end. Ciao. Giuseppe
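A sketch of that workflow (the index, sourcetype, and filter are assumptions; run the search without the final pipe first, to be sure it returns only the events you really want gone):

index=example sourcetype=my_sourcetype bad_field="bad value"
| delete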
Absolutely, I was too much of a beginner to grasp it!
Replacing sendemail.py worked for me as well. Linking a related post: https://community.splunk.com/t5/Other-Usage/Why-is-Splunk-send-email-function-not-working-version-9-1-0-2/m-p/658209/highlight/false#M1420
Hi @dhana22, it isn't possible to configure two License Masters. If you need DR, you should have a turned-off copy of the License Master in the secondary site, aligned using systems such as Dell Recovery Point (or similar from other providers), using the same hostname and possibly IP address. Anyway, if the License Master is down, the system continues to work; the only problem is warning messages. Ciao. Giuseppe
After upgrading to v9.1.1, I also ran into that issue, but only for Windows machines that had Splunk Enterprise installed. The Linux installations were not affected. I fixed it by replacing ...\Splunk\etc\apps\search\bin\sendemail.py with an older version. Now I am getting integrity check errors, but e-mail alerts work fine.  There is another post that says this issue might be fixed in v9.1.2. Let's see: https://community.splunk.com/t5/Splunk-Enterprise/What-is-happening-in-Splunk-Enterprise-V9-1-0-1/m-p/651297
Hi @VK18, as I said, it isn't a best practice, but you can locate both roles on the same machine, adding more resources to the VM. About the bottleneck: yes, they use different ports, but the network interface is the same. If needed, as you can read at https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Deploymentclientconf, you could reduce the update frequency toward the DS by changing the phoneHomeIntervalInSecs parameter in deploymentclient.conf (on the forwarders) from the default (60 seconds) to 120 or 180 seconds or more, as you can read at https://community.splunk.com/t5/Deployment-Architecture/When-managing-large-numbers-of-deployment-clients-1000-what-is/m-p/37685  Ciao. Giuseppe
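A deploymentclient.conf sketch for that change (the Deployment Server host and port are assumptions; phoneHomeIntervalInSecs is the parameter named above):

# deploymentclient.conf on each forwarder
[deployment-client]
phoneHomeIntervalInSecs = 180

[target-broker:deploymentServer]
targetUri = ds.example.com:8089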