All Posts


Hi Rich, I'm asking to be sure. Port 9997 is for sending data, and we have an indexer cluster structure. What should the direction of the port openings be? Somebody says "you should open 9997 and 8089 towards the DS" and I ask WHY - we are doing it like this, but that's not an answer. Do the logs go to the DS first and then get written to the indexer?

10.10.10.1 UF
10.10.10.2 DS
10.10.10.3 indexer cluster LB

Scenario 1: 10.10.10.1 UF --9997,8089--> 10.10.10.2 DS
Scenario 2: 10.10.10.1 UF --9997--> 10.10.10.3 indexer cluster LB, and 10.10.10.1 UF --8089--> 10.10.10.2 DS
You need to add your index-time configurations on the HF, not on the SH or indexers, since the HF is where your data is being parsed. Your TIME_PREFIX configuration could be simpler, but we would need to see a full sample log line to help with that; if you do provide a sample, please redact any sensitive information.
"level" is not a valid value for SOURCE_KEY. Try _raw, instead.

[infonull]
SOURCE_KEY = _raw
REGEX = "level":"info"
DEST_KEY = queue
FORMAT = nullQueue
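For completeness, the full pairing might look like the sketch below. The [abc] sourcetype stanza comes from the original question, and both files need to live on the heavy forwarder that parses the data (SOURCE_KEY defaults to _raw, so it could also be omitted):

```
# props.conf (on the heavy forwarder) - sketch using the question's stanza names
[abc]
TRANSFORMS-null = infonull

# transforms.conf
[infonull]
SOURCE_KEY = _raw
REGEX = "level":"info"
DEST_KEY = queue
FORMAT = nullQueue
```

A restart of the HF is needed for these parsing-time settings to take effect, and they only apply to newly indexed data.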
Hi - the numbered list provided step-by-step instructions for searching for the items I mentioned. In addition, the lookup errors you see in the UI usually tell you the name of the lookup-related configuration that's having problems. This doc page should help: https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Aboutlookupsandfieldactions#Lookup_definitions
Hi! I received an event with the following time string: 2023-12-12T13:39:25.400399Z CEF:0..... This time is already in the correct timezone, but because of the Z, Splunk adds 5 hours. I understand that Z is a timezone indicator, but how can I ignore it? The flow of this event is: Source --> HF --> Indexers. On the HF and the indexers I don't have any props or transforms settings. On the search heads I extract a few fields from this event and it works, but I can't extract this time correctly without the Z. I put the following regex inside props.conf on my SHs, and I also tried to put it in the indexers' props.conf:

TIME_PREFIX = ^\d{2,4}-\d{1,2}-\d{1,2}T\d{1,2}:\d{1,2}:\d{1,2}\.\d{1,6}

I tried to add TZ or TZ_ALIAS inside props.conf, but with no effect. Where could I be wrong? Thanks
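For what it's worth, one approach people use here is to make TIME_FORMAT stop before the Z, so Splunk never reads the UTC marker and instead applies the TZ setting. A sketch only - the sourcetype name and the TZ value are placeholders, and this belongs on the first parsing instance in the path (the HF here), not on the SHs:

```
# props.conf on the HF - [cef_events] and US/Eastern are assumed for illustration
[cef_events]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 27
TZ = US/Eastern
```

MAX_TIMESTAMP_LOOKAHEAD caps how far Splunk scans for the timestamp, so the trailing Z is simply never consumed.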
I am not sure it is possible with Dashboard Studio (which I assume you are using since you mentioned JSON). With Classic / SimpleXML dashboards, you can use a change handler on the dropdown to set additional tokens based on the choice made.
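A minimal SimpleXML sketch of that idea - the token names, choices, and index values below are invented for illustration:

```
<input type="dropdown" token="region">
  <label>Region</label>
  <choice value="emea">EMEA</choice>
  <choice value="apac">APAC</choice>
  <change>
    <condition value="emea">
      <set token="idx">emea_main</set>
    </condition>
    <condition value="apac">
      <set token="idx">apac_main</set>
    </condition>
  </change>
</input>
```

Panel searches can then reference the additional token, e.g. index=$idx$.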
Hi, I am facing an issue: no recent logs are found for sourcetype=abc:xyz (example) and index=pqr (example) after 25th November. We are able to see the logs up until the 25th of November. Guidance on how to check this would be helpful. Thanks, Pooja
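As a starting point, one way to see when data last arrived per host is a tstats search like this sketch (using the example index and sourcetype names from the post; tstats is fast because it reads only index metadata):

```
| tstats latest(_time) AS last_seen WHERE index=pqr sourcetype=abc:xyz BY host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - last_seen
```

Run it over a time range spanning the gap; if particular hosts stop reporting after 25 November, the forwarders on those hosts are usually the next place to look.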
Thank you for your response. I'm explicitly using index="oracle" as an example to confirm that the search works fine for an event index. When I use the same search for a metrics index (replacing index="oracle" with index="murex_metrics") it doesn't work, even though we have existing dashboards using this metrics index. Here is the example of a metrics index search:

| rest splunk_server="local" "/servicesNS/-/-/data/ui/views"
| search "eai:data"="*index=murex_metrics*"

Thanks
Hi @parthiban ,
you only have to set up the conditions for the alert:

<your_search>
| stats count(eval(status="offline")) AS offline_count count(eval(status="online")) AS online_count earliest(eval(if(status="offline",_time,""))) AS offline earliest(eval(if(status="online",_time,""))) AS online
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "Online",
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND online>offline, "Offline but newly online",
    offline_count>0 AND online_count>0 AND online<offline, "Offline",
    offline_count=0 AND online_count=0, "No data")
| search condition="Offline" OR condition="Offline but newly online"
| table condition

In this way your alert will trigger on the two conditions.
Ciao.
Giuseppe
Hi @abhi04 ,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Hello! A team at my organization is concerned about MongoDB 4.2 running on my Splunk hosts and wants me to create a plan to upgrade them to 6.0 at a minimum. From what I've read, it seems like this is either not possible or a bad idea due to possible modifications that have been made by Splunk. Is there a documented way to upgrade to MongoDB 6.0 or newer? Thanks.
You can always remove extra fields when they are no longer needed. In this case it's as simple as

| fields - count

It's relatively easy to transform the output from my search into something resembling "your output" - you just add

| stats list(Domain) as Domain list(domaincount) as "Domain Count" by Workstation User

You can of course reorder the results with the "fields" command at the end to move some columns to the left and others to the right.

There is one catch - you get two separate multivalue fields, Domain and Domain Count. The "problem" with Splunk is that these are two separate fields and there is no connection between them. So if you wanted to, for example, sort one of them and reorder the other accordingly, there's no way to do it unless you zip them together into a list of single values, sort them, and then split them back. Ugly. Also, if by any chance your search leading to the stats list() produced partially empty results, you wouldn't get multivalue fields with "holes" in them as the result.
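The zip-sort-split workaround mentioned above could be sketched like this (mvmap requires Splunk 8.0 or later; the field names follow the example above):

```
| eval zipped=mvsort(mvzip(Domain, 'Domain Count', "|"))
| eval Domain=mvmap(zipped, mvindex(split(zipped, "|"), 0))
| eval 'Domain Count'=mvmap(zipped, mvindex(split(zipped, "|"), 1))
| fields - zipped
```

Note that mvsort sorts lexicographically, so sorting by the numeric count would additionally need zero-padding before the zip.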
Sample log: {"timestamp":"2023-12-12T15:27:22.890Z","shortmessage":"(abc): def ghi","level":"info","source":"xyz","file":"/home/abc/def.txt","line":144}
There are two separate things:

One is an indexer cluster - oversimplifying a bit, it's just a bunch of indexers between which buckets might be replicated (but don't have to be; I've seen clusters with RF=1 - it didn't give you HA but had its pros), managed by a CM (possibly redundant in active-passive mode). A single cluster might be "stretched" across several different sites, but you still need direct communication between the sites because of management traffic between the CM and the indexers in all sites, and replication traffic between the indexers themselves (again - you can probably configure a multisite cluster and contain all buckets within a single site, but it doesn't make much sense).

Another thing is distributed search - you can have several separate indexers or clusters and have a search head (or search head cluster) searching across all of them.

There is also another, even more kinky way of searching - federated search - where a SH searches not directly from indexers but from another SH. But let's leave that aside for now.

So depending on your business needs and technical constraints, you might need one architecture or the other. If you have one cluster, the whole cluster has just one CM (possibly with a redundant instance). There's no "splitting a cluster among several CMs". Period. So you either need one big cluster or several smaller ones (but again - separate clusters, not one big cluster with several smaller CMs - there's no such thing).

Which one will be appropriate in your case? That's something you should discuss with a skilled Splunk Architect - that's what you typically engage either Splunk PS or your friendly local Splunk Partner for.
Hi, I am trying to ignore the logs that have level "info" and want to send them to the null queue. Example logs (not including the pattern before and after the logs, but it's in JSON format and this is one of the fields):

"level":"info",

I have tried the below and it does not work. Can someone help - is this correct, or is there another way? The below is in the heavy forwarder's props:

[abc]
TRANSFORMS-null = infonull

transforms:

[infonull]
SOURCE_KEY = level
REGEX = info
DEST_KEY = queue
FORMAT = nullQueue
Thanks @richgalloway  and @gcusello 
Thank you all. So, we have the concept of regions, and our Splunk architecture revolves around it. Let's take the European one - it has all the Splunk data of Europe in the European indexer cluster, and because of that I asked the question of whether each region should have its own cluster master or whether they can share one. If they share, how can I figure out how many buckets the cluster handles, so that we won't reach the one million...
Hi @gcusello
No, I don't want a continuous alert for offline... I want to trigger on the first offline and the first online message. Thanks for understanding.
Hi @parthiban ,
the notification when the status is offline isn't a problem, but after the first offline, do you want the alert to continue firing "offline", or do you want a message when it comes back online?
If you want a message every time you have an offline and the following online, you could try something like this:

<your_search>
| stats count(eval(status="offline")) AS offline_count count(eval(status="online")) AS online_count earliest(eval(if(status="offline",_time,""))) AS offline earliest(eval(if(status="online",_time,""))) AS online
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "Online",
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND online>offline, "Offline but newly online",
    offline_count>0 AND online_count>0 AND online<offline, "Offline",
    offline_count=0 AND online_count=0, "No data")
| table condition

In this way you can choose the conditions that trigger the alert.
Ciao.
Giuseppe
@richgalloway Below is the SPL used:

index="*****" host="sclp*" source="*****" "BOLT_ARIBA_ERROR_DETAILS:" "1-57d28402-9058-11ee-83b7-021a6f9d1f1c" "5bda7ec9"
| rex "(?ms)BOLT_ARIBA_ERROR_DETAILS: (?<details>\[.*\])"
| spath input=details output=ERROR_MESSAGE path={}.ERROR_MESSAGE
| spath input=details output=PO_NUMBER path={}.PO_NUMBER
| spath input=details output=MW_ERROR_CODE path={}.MW_ERROR_CODE
| spath input=details output=INVOICE_ID path={}.INVOICE_ID
| spath input=details output=MSG_GUID path={}.MSG_GUID
| spath input=details output=INVOICE_NUMBER path={}.INVOICE_NUMBER
| spath input=details output=UUID path={}.UUID
| spath input=details output=DB_TIMESTAMP path={}.DB_TIMESTAMP
| table ERROR_MESSAGE PO_NUMBER MW_ERROR_CODE INVOICE_ID MSG_GUID INVOICE_NUMBER UUID DB_TIMESTAMP