All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


https://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf#KEYS: SOURCE_KEY = MetaData:Source BTW, you don't need fields.conf on the HF. You need it on SH.
There is no possibility that you have two identically named files in one directory. Maybe one has a typo in its name, or maybe the case of the letters in the name is wrong. We don't know, but check again. (Hint: a wrongly named file won't be used, and the values in it won't get encrypted on first read.)
Yep. You're overthinking it a bit. Either you have a field containing the job state (Starting/Completed) or you can create one by

| eval state=case(searchmatch("Starting"),"Starting",searchmatch("Completed"),"Completed",true(),null())

Then you need to check the state for each separate job:

| stats values(state) as states by whatever_id_you_have_for_each_job

(If you want to retain the job name, which I assume is a more general classifier than a single job identifier, add values(aJobName) to that stats command.) Then you can filter to see only non-finished jobs:

| where NOT states="Completed"

Keep in mind that matching multivalued fields can be a bit unintuitive at first.
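Putting those pieces together, a minimal end-to-end sketch might look like the following (the index, sourcetype, and the jobId and aJobName field names are placeholders, not taken from the original data):

index=anIndex sourcetype=aSourcetype ("Starting" OR "Completed")
| eval state=case(searchmatch("Starting"),"Starting",searchmatch("Completed"),"Completed",true(),null())
| stats values(state) as states values(aJobName) as jobNames by jobId
| where NOT states="Completed"

Note that the where clause relies on multivalue matching: states="Completed" is true if any value in the list equals "Completed", so NOT of it keeps only jobs with no Completed event.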
You can try to align the _time field with the bin command and then match events by exactly the same value of that field (you can keep the original value for reference, of course). Or you can use the transaction command (generally, transaction should be avoided since it's relatively resource-intensive and has its limitations, but sometimes it's the only reasonable solution).
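As a rough illustration of both approaches (the span length, index, sourcetype, and jobId field are assumptions for the sketch, not from the original question). Aligning with bin while keeping the original timestamp:

index=anIndex sourcetype=aSourcetype
| eval orig_time=_time
| bin _time span=1m
| stats values(orig_time) as orig_times count by _time jobId

Or grouping with transaction:

index=anIndex sourcetype=aSourcetype
| transaction jobId startswith="Starting" endswith="Completed" maxspan=1h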
@kiran_panchavat Please stop spreading misinformation (especially content created by generative language models). The summaryindex command is an alias for the collect command. There is absolutely no difference in behaviour between those two commands, since they are the same command, which can be called by either name. This is just my speculation, but I suspect the command was originally called summaryindex because it was meant to collect data for summary indexing, and was later "generalized" to the name collect, which is the current command name in the docs; the summaryindex name was retained for backward-compatibility reasons.
Thanks, I would appreciate it if you stepped back from this: I will see if anyone else in the community has an idea / understands what I am saying. Have a great day, Rick
What I am trying to write is some SPL code that will identify log events that only have a "Starting" event with no "Completed" event, grouped by a specific job name extracted from each log event, where all events are in the same index and sourcetype. A job is still 'running' if it only has a "Starting" event with no "Completed" event.

If my starting query is:

index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) AND "Starting"
| rex field=_raw "Batch::(?<aJobName1>[^\s]*)"
| stats count AS aCount1 by aJobName1

then I only want to keep log events that have no "Completed" event from the same index and sourcetype:

index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) AND "Completed"
| rex field=_raw "Batch::(?<aJobName2>[^\s]*)"
| stats count AS aCount2 by aJobName2

I have tried using where isnull(aCount2) with appendcols, but stats removes the _raw data that I need for the rest of my code. How would I go about getting just those log events (_raw) for jobs that are only "Starting"? I might be overthinking this but am struggling...
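One way to keep the raw events while still filtering out jobs that have a "Completed" event is eventstats, which adds aggregate fields without discarding _raw. A sketch, reusing the index, sourcetype, and rex from the question (the single aJobName field name is an assumption):

index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) ("Starting" OR "Completed")
| rex field=_raw "Batch::(?<aJobName>[^\s]*)"
| eventstats sum(eval(if(searchmatch("Completed"),1,0))) as completedCount by aJobName
| where completedCount=0 AND searchmatch("Starting")

This leaves only the raw "Starting" events for jobs that never logged a "Completed" event.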
Ok, have it your way - don't give more details. You have two values and want to chart them across time. What do you want to chart? The same value through the whole time period? Be my guest. It makes no sense, but you apparently know better. But then again - why ask for help in the first place?
Well... there are two possible approaches to migrating such an environment. The first is as you want to do it - swap "one for one", keeping the same addresses, names, and so on. You might get away with moving the whole Splunk installation from one server to another and pretending nothing changed, but that might be tricky depending on your data layout, and you don't have much room for error - you replace the machine and it must work perfectly; otherwise it's very hard to diagnose and fix. Another way, at least with some components (clustered indexers, clustered search heads, possibly HFs), would be to deploy a new component, add it to the environment, migrate data if applicable, and decommission the old one.
I do not believe you need to know the specifics of the search. I have two searches returning numerical values via the stats command - this could be any search on any data. I am subtracting one from the other and want to graph that value against time.
But your search shows just two data points. Without more details on your data it's impossible to help you.
Yes. That's how it works - values(whatever) creates just one so-called multivalued field with a list of possible values of the given field. The field is a "standalone being" - if you have two multivalued fields, they are not connected with each other in any way. You need to either combine both values prior to statsing:

| eval destipdomain=dest_ip.":".dest_domain
| stats values(destipdomain) by src_ip

Then, if you need to, you'll have to split the value by the colon character. An alternative approach would be to stats by more fields:

| stats values(dest_domain) by src_ip dest_ip
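If you do need to split the combined value back out afterwards, a minimal sketch (assuming dest_ip and dest_domain never contain a colon themselves):

| eval destipdomain=dest_ip.":".dest_domain
| stats values(destipdomain) as pairs by src_ip
| mvexpand pairs
| eval dest_ip=mvindex(split(pairs,":"),0), dest_domain=mvindex(split(pairs,":"),1)

mvexpand turns each src_ip row into one row per pair, so the split evals then recover the two original fields side by side.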
Timechart the difference against time... The specific use case is itself around logging. I have a third-party SaaS provider send logs to our GCP Splunk over the internet. The issue is that they are intermittently and significantly duplicating individual log entries due to something in the way they are forwarding, so I want to chart this to have an artefact I can point at for analysis.
What would you want to timechart here as you have only two values? This makes no sense.
How did you go with this? I'm facing the same issue.
How do I fix replication bundle on Splunk Cloud SH?
Hi @Priyaranjan.Behera, I did some searching and I found this info; I don't think it's related to your issue, but I wanted to share just in case. The javaagent should be instrumented into the local Java application along with the Java agent API. Since you didn't instrument the Java process with the Java agent, you are getting that exception. So, can you please try to instrument the Java process with the javaagent along with the javaagent API and see how it goes?
Hi @Taj.Hassan  I'm going to be sending you a Community Private Message to ask for some sensitive info. Please respond there. 
Dear SPLUNKos, I need to create a time chart as per the below:

Run one "grand total" search.
Run a second search which is a dedup of the first search.
Subtract the difference and timechart only the difference.

I have got to the point below, which gives me a table of data, but I cannot get this to chart. Mr SPLUNK in my organisation tells me this cannot be done, which is borne out by the documentation on the timechart command, which indicates it can only reference field data, not calculated data. Is there a way?

<SEARCH-GRANDTOTAL>
| stats count as Grandtotal
| appendcols [ <SEARCH-2> | stats count as TotalDeDup ]
| eval diff=Grandtotal - TotalDeDup
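For what it's worth, the subtraction can be done inside a single search by letting timechart compute both counts per time bucket, which gives it the _time column it needs. A sketch, assuming some field (here called uniqueField, a placeholder) identifies the duplicates that the dedup was removing, and an hourly span chosen arbitrarily:

<SEARCH-GRANDTOTAL>
| timechart span=1h count as Grandtotal dc(uniqueField) as TotalDeDup
| eval diff=Grandtotal - TotalDeDup
| fields _time diff

Here count is the grand total per bucket, dc(uniqueField) plays the role of the deduplicated count, and the eval on the timechart output is what makes the calculated diff chartable.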
I have a query that gets a list of destination IPs per source IP. I also want to add a column for the associated domain name per destination IP. The query I have to get destination IPs per source IP is:

index=network
| stats values(dest_ip) by src_ip

I do not want to use eval to combine the values of dest_ip and domain into one field, and I tried mvappend but I am unable to achieve the result I want. I tried

| stats values(dest_ip) values(domain) by src_ip

but the dest_ip and domain columns appear to be independent of each other. What I am looking for is a table with the columns:

src_ip | domain_ips | domain

I just need the domain name to be "connected" with its domain_ip.