All Posts

Hello, I just tried the rex but the output is still not in JSON format. Do I need to export the query results as JSON or something like that?          Regards,
Sorry, I missed the main rule: LINE_BREAKER = <\d+>\d{4}-
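For context, LINE_BREAKER normally lives in a props.conf sourcetype stanza alongside SHOULD_LINEMERGE, and it needs a capturing group to mark where Splunk breaks the stream. A minimal sketch (the sourcetype name my_syslog is a placeholder, not from this thread):

```
[my_syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<\d+>\d{4}-
```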
Hi @vijreddy30, are you speaking of HA using a single-site cluster (even if the machines are located in more than one site) or DR using a multisite cluster? They are really different! In the first case you have to configure your Replication Factor and Search Factor so that one site has a full searchable copy of your data. Anyway, in both cases you can see https://docs.splunk.com/Documentation/Splunk/9.1.1/Indexer/Multisitedeploymentoverview for how to configure the Indexer Cluster and https://docs.splunk.com/Documentation/Splunk/9.1.1/DistSearch/AboutSHC for how to configure a Search Head Cluster. The management servers (Cluster Master, Deployer, Deployment Server, License Master and Monitoring Console) are unique in the Splunk architecture, and the DR site will also work without them (for a short period): they aren't a single point of failure. You could possibly keep a powered-off copy of them in the secondary site. You can find a description of these architectures at https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf  Ciao. Giuseppe
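As an illustration of the multisite replication and search factors Giuseppe mentions, the cluster master's server.conf might look roughly like this (a sketch only; the site names and factor values are placeholders, not from this thread):

```
# server.conf on the cluster master (manager node)
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
```

With origin:1,total:2 in the search factor, each site keeps at least one searchable copy, which is what makes a site-level DR failover searchable.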
(Data normalization is just to place data in diagonal tables for faster retrieval.) Anyway, the reason why data characteristics matter is because the cost of searching depends on them.  This is true of all relational data, whether it is SQL or SPL.

"All from the same set of events" is too broad.  It can describe a set of diagonal events like

field1  field2  x    y    z    a    b    c
f1v1            xv1  yv1  zv1
        f2v1                   av1  bv1  cv1
f1v2            xv2  yv2  zv2
        f2v2                   av2  bv2  cv2

But it fits just as well a set of fully populated events like

field1  field2  x    y    z    a    b    c
f1v1    f2v1    xv1  yv1  zv1  av1  bv1  cv1
f1v2    f2v2    xv2  yv2  zv2  av2  bv2  cv2

For fully populated data, why not use this?

index=example
| stats avg(field1) perc95(field2) by x,y,z,a,b,c

For diagonal (sparse) data, this would speed things up:

index=example field1=* x=* y=* z=*
| stats avg(field1) by x,y,z
| append
    [ search index=example field2=* a=* b=* c=*
      | stats perc95(field2) by a,b,c ]

I suspect that you have a specific use case, know that your data falls somewhere between these extremes, and have some specific results in mind.  You are correct to say that this is data engineering, because in Splunk you are really designing your schema on the fly.  This is where Splunk shows its tremendous power.

In traditional data engineering, you optimize your schema based on the queries (analytics) you anticipate and on data characteristics.  Same here.  You need to articulate data characteristics in order to optimize SPL.  There is no single "optimal" pattern.  Not in SQL, not in SPL.

As you already realized, there is a good reason why Optimizing Searches emphasizes limiting the number of events retrieved from the index.  If you append multiple subsearches that retrieve the same raw events from the index, as some of your mock code does, it naturally multiplies the index-search cost.  When events are numerous, index-search cost can heavily affect total cost.  So, using filters in the first pipe is important.
But which filters can be applied depends heavily on data characteristics and the kind of analytics you perform.  The pattern you observed is very much a function of your actual data and the stats you perform.
Hi team, in my project, Zone 1 has a Deployment Server, a HF, and a (SH+Indexer); Zone 2 also has a Deployment Server, a HF, and a (SH+Indexer), and we don't have a cluster master. My requirement is to set up a high-availability server configuration between Zone 1 and Zone 2. My plan is, on the Zone 2 search+indexer server, to go to Settings --> Indexer Clustering and there set the master node to the Deployment Server of Zone 1, because I don't have a cluster master in my project. Please guide me on my requirement.     Vijreddy
Hi @Utkc137, you have three solutions: use an rsyslog server to receive the UDP traffic and write the logs to a file that's read by a Forwarder (in this case it works even if Splunk is down); use the SC4S app (https://splunkbase.splunk.com/app/4740); or add the persistentQueueSize parameter (https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Inputsconf) to your inputs.conf, using a large value. I don't like SC4S, so I suggest using both the 1st and 3rd solutions. Ciao. Giuseppe
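A sketch of the third option, the persistent queue on a UDP input (the port and sizes are placeholders; tune them to your traffic volume and disk space):

```
# inputs.conf on the instance receiving the syslog traffic
[udp://514]
sourcetype = syslog
queueSize = 10MB
persistentQueueSize = 10GB
```

The persistent queue buffers incoming data to disk when the downstream pipeline is blocked, which covers short outages but not data arriving while the Splunk process itself is down (that's what the rsyslog-to-file approach covers).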
@tej57, is this correct?
Hi @adamsmith47, to send some logs to two indexer groups you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.1.1/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input Ciao. Giuseppe
Hi @LearningGuy, did you try the delete command (obviously after enabling the can_delete role)? Use this command with great care, and at the end disable the can_delete role for your account! You should create a search that identifies the events to delete and then add the delete command at the end. Ciao. Giuseppe
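A minimal sketch of the pattern Giuseppe describes (the index, sourcetype, and time range are placeholders; run the search without the final pipe first to verify exactly what matches, since delete is irreversible):

```
index=example sourcetype=bad_data earliest=-24h
| delete
```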
Absolutely, I was too much of a beginner to grasp it!
Replacing sendemail.py worked for me as well. Linking a related post: https://community.splunk.com/t5/Other-Usage/Why-is-Splunk-send-email-function-not-working-version-9-1-0-2/m-p/658209/highlight/false#M1420
Hi @dhana22, it isn't possible to configure two License Masters. If you need DR, you should have a powered-off copy of the License Master in the secondary site, kept aligned using a system such as Dell RecoverPoint (or a similar product from another provider), with the same hostname and, if possible, the same IP address. Anyway, if the License Master is down, the system continues to work; the only problem is warning messages. Ciao. Giuseppe
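Because the standby copy keeps the same hostname, the license peers' configuration doesn't change on failover; each peer just keeps pointing at the same URI. A sketch (the hostname is a placeholder):

```
# server.conf on each Splunk instance that is a license peer
[license]
master_uri = https://license-master.example.com:8089
```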
After upgrading to v9.1.1, I also ran into that issue, but only on Windows machines that had Splunk Enterprise installed. The Linux installations were not affected. I fixed it by replacing ...\Splunk\etc\apps\search\bin\sendemail.py with an older version. Now I am getting integrity check errors, but e-mail alerts work fine.   There is another post that says this issue might be fixed in v9.1.2. Let's see: https://community.splunk.com/t5/Splunk-Enterprise/What-is-happening-in-Splunk-Enterprise-V9-1-0-1/m-p/651297
Hi @VK18, as I said, it isn't a best practice, but you can locate both roles on the same machine, adding more resources to the VM. About the bottleneck: yes, they use different ports, but the network interface is the same. In addition, as you can read at https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Deploymentclientconf, you could reduce the update frequency toward the DS by changing the phoneHomeIntervalInSecs parameter in deploymentclient.conf (on the forwarders) from the default (60 seconds) to 120 or 180 seconds or more, as you can read at https://community.splunk.com/t5/Deployment-Architecture/When-managing-large-numbers-of-deployment-clients-1000-what-is/m-p/37685  Ciao. Giuseppe
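The phone-home change Giuseppe describes is a one-line setting on each forwarder (180 seconds here is just the example value from the reply):

```
# deploymentclient.conf on each forwarder
[deployment-client]
phoneHomeIntervalInSecs = 180
```

Tripling the interval cuts the deployment server's polling load to roughly a third, at the cost of clients picking up new app deployments a couple of minutes later.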
Isn't this exactly what I posted in https://community.splunk.com/t5/Splunk-Search/conditional-count-eval/m-p/665764/highlight/true#M228420?  Splunk doesn't really store boolean values.
Hi @danspav,     Thanks for your response!     It works, thanks for your brief and clear explanation. It means a lot. Thanks! Manoj Kumar S
I figured out the minor error: "True" needed to be "true", since the value returns a boolean.
This is even more confusing.  Are you saying that you need to find users that have only the one event code 4766 during the period?  All you need to do is

index="xx"
| stats values(EventCode) as EventCode by user
| where mvcount(EventCode) == 1 AND EventCode == 4766

(Adjust "user" to whatever your username field is actually called.)
Hi @Lax, grouping by Condition is easy; you have to use the stats command: <your_search> | stats count BY Condition The real question is how the values appear in the Condition field: does every event contain only one value, or more than one? If more than one, how are they grouped (in the event)? Are they in JSON format? I could be more detailed if you could share some samples of your logs. Ciao. Giuseppe
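If Condition turns out to hold multiple values per event, for example extracted from a JSON array, a common pattern is to expand it before counting. A sketch, assuming a JSON array field named Condition{} (a placeholder, since no sample logs were shared):

```
<your_search>
| spath path=Condition{} output=Condition
| mvexpand Condition
| stats count BY Condition
```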