Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Thanks, I would appreciate it if you stepped back from this: I will see if anyone else in the community has an idea / understands what I am saying. Have a great day. Rick
What I am trying to write is some SPL that identifies log events that have only a "Starting" event with no "Completed" event, by a specific Job Name extracted from each log event, where all events are in the same index and sourcetype. A job is still 'running' if it only has a "Starting" event with no "Completed" event.

My starting query is:

index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) AND "Starting"
| rex field=_raw "Batch::(?<aJobName1>[^\s]*)"
| stats count AS aCount1 by aJobName1

Then I only want to keep log events that have no "Completed" event from the same index and sourcetype:

index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) AND "Completed"
| rex field=_raw "Batch::(?<aJobName2>[^\s]*)"
| stats count AS aCount2 by aJobName2

I tried appendcols followed by where isnull(aCount2), but stats removes the _raw data I need for the rest of my search. How would I go about getting just those log events (_raw) for jobs that are only "Started"? I might be overthinking this, but I am struggling...
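One possible sketch (not the only way to do this), assuming the job name always follows Batch:: and every relevant event contains either the literal "Starting" or "Completed": search both event types at once, classify each event, use eventstats (which, unlike stats, keeps _raw) to collect the statuses seen per job, and keep only events for jobs with no "Completed" status.

```spl
index=anIndex sourcetype=aSourcetype (jobName1 OR jobName2 OR jobName3) ("Starting" OR "Completed")
| rex field=_raw "Batch::(?<aJobName>[^\s]*)"
| eval status=if(searchmatch("Completed"), "Completed", "Starting")
| eventstats values(status) AS statuses by aJobName
| where isnull(mvfind(statuses, "Completed"))
```

Because eventstats annotates events instead of aggregating them away, the surviving rows are the original raw events for jobs that only ever "Started".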
Ok, have it your way - don't give more details. You have two values and want to chart them across time. What do you want to chart? The same value through the whole time period? Be my guest. It makes no sense, but you apparently know better. But then again - why ask for help in the first place?
Well... there are two possible approaches to migration of such an environment. The first is as you want to do it - swap "one for one", keeping the same addresses, names and so on. You might get away with moving the whole Splunk installation from one server to another and pretending nothing changed, but that might be tricky depending on your data layout, and you don't have much room for error - you replace the machine and it must work perfectly. Otherwise it's very hard to diagnose/fix. Another way, at least with some components (clustered indexers, clustered search heads, possibly HFs), would be to deploy the new component, add it to the environment, migrate data if applicable, and decommission the old one.
I do not believe you need to know the specifics of the search. I have 2 searches returning numerical values via the stats command; this could be any search on any data. I am subtracting one from the other and want to graph that value against time.
But your search shows just two data points. Without more details on your data it's impossible to help you.
Yes. That's how it works - values(whatever) creates just one so-called multivalue field with a list of the possible values of the given field. The field is a "standalone being" - if you have two multivalue fields, they are not connected with each other in any way. You need to either combine both values prior to the stats:

| eval destipdomain = dest_ip.":".dest_domain
| stats values(destipdomain) by src_ip

Then, if you need to, you'll have to split the value by the colon character. An alternative approach would be to stats by more fields:

| stats values(dest_domain) by src_ip dest_ip
Timechart the difference against time. The specific use case is itself around logging: I have a third-party SaaS provider sending logs to our GCP Splunk over the internet. The issue is that they are intermittently and significantly duplicating individual log entries due to something in the way they are forwarding, so I want to chart this to have an artefact I can point at for analysis.
What would you want to timechart here as you have only two values? This makes no sense.
How did you go with this? I'm facing the same issue.
How do I fix replication bundle on Splunk Cloud SH?
Hi @Priyaranjan.Behera, I did some searching and found this info. I don't think it's related to your issue, but I wanted to share it just in case. The javaagent should be attached to the local Java application along with the Java agent API. Since you didn't instrument the Java process with the Java agent, you are getting that exception. So, can you please try instrumenting the Java process with the javaagent along with the javaagent API and see how it goes?
Hi @Taj.Hassan, I'm going to send you a Community Private Message to ask for some sensitive info. Please respond there.
Dear SPLUNKos, I need to create a timechart as per the below:
1. Run one "grand total" search.
2. Run a second search which is a dedup of the first search.
3. Subtract the difference and timechart only the difference.

I have got to the point below, which gives me a table of data, but I cannot get this to chart. Mr SPLUNK in my organisation tells me this cannot be done, which is borne out by the documentation on the timechart command, which indicates it can only reference field data, not calculated data. Is there a way?

<SEARCH-GRANDTOTAL>
| stats count as Grandtotal
| appendcols [ <SEARCH-2> | stats count as TotalDeDup ]
| eval diff = Grandtotal - TotalDeDup
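A sketch of one possible approach, assuming both counts can come from a single base search and that duplicates are exact copies of _raw: bin events by time, compute the total and deduplicated counts per bin in one stats pass, and derive the difference per bin. The index/sourcetype and the one-hour span are illustrative.

```spl
index=anIndex sourcetype=aSourcetype
| bin _time span=1h
| stats count AS Grandtotal dc(_raw) AS TotalDeDup by _time
| eval diff = Grandtotal - TotalDeDup
| timechart span=1h sum(diff) AS duplicates
```

Because the subtraction happens per time bin rather than on two grand totals, the calculated field charts over time like any other field, avoiding the appendcols limitation entirely.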
I have a query that gets a list of destination IPs per source IP. I also want to add a column for the associated domain name per destination IP. The query I have to get destination IPs per source IP is:

index=network | stats values(dest_ip) by src_ip

I do not want to use eval to combine the values of dest_ip and domain into one field, and I tried mvappend but was unable to achieve the result I want. I tried | stats values(dest_ip) values(domain) by src_ip, but the dest_ip and domain columns appear to be independent of each other. What I am looking for is a table like:

src_ip domain_ips domain

I just need the domain name to be "connected" with the domain_ip.
Can this app please be updated to make it cloud-compatible, as well as to show it is compatible with v9 of Splunk? There's no reason I can see that it can't be, other than needing a quick update of the config. I haven't run AppInspect on it yet, though, so possibly that is what is stopping this.
Make sure the Nagios index contains a field called "host_name". If it does not, then change the rename command to make the Server_name field match a field name in the index.
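For illustration, a minimal sketch of the rename in question (the index name here is an assumption; only the Server_name and host_name field names come from the post above):

```spl
index=nagios
| rename Server_name AS host_name
| stats count by host_name
```

If host_name already exists in the index, the rename line can simply be dropped.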
I have a distributed deployment at version 9.0.4.1. Everything is running on RHEL 7, and the system/server team does not want to do in-place upgrades to RHEL 9. I have been tasked to migrate each node to a new replacement server (which will be renamed / re-IP-addressed to match the existing one). From what I have read this is possible, but I have a few questions.

Let's say I start with the standalone nodes, like the SHC deployer, Monitoring Console, and License Manager. These are the general steps I have gathered:
1. Install Splunk (same version) on the new server.
2. Stop Splunk on the old server.
3. Copy old configs to the new server ?? <<< which configs? Is there a checklist documented somewhere?
4. Start the new Splunk server and verify.

I could go through each directory copying configs, but any advice to expedite this step is appreciated. Thank you
The first event that came in doesn't have a timestamp, which is the reason for the error, but the other events are extracted properly.
Hello, I use Microsoft's Visual Studio Code as a code locker for my SPL, XML, and JSON Splunk code. Does anyone have experience running SPL code from VS Code? I have the Live Server extension installed and enabled. However, it opens a directory listing within Chrome, and when I drill down to the SPL file, instead of running the code it downloads the file. Thanks and God bless, Genesius