All Posts

Hi @din98, the 'duration' field from the transaction command sounds like what you're looking for. Do the jobs each have a unique ID field you could run this on?  | transaction <ID>
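A minimal sketch of how that could look, assuming the jobs carry a unique JobID field; the index and sourcetype names here are placeholders, not from this thread:

index=your_index sourcetype=your_sourcetype
| transaction JobID
| table JobID duration _time

The transaction command stitches the events for each JobID together and calculates duration as the time span between the first and last event in each transaction.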
Assuming request.path is a field name, you are looking for  | eval action=case(like('request.path',"auth/ldap/login/names"),"success")  
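If the trailing part of the path keeps changing, a wildcard inside like() may be what's needed; a sketch (the "failure" default branch is an assumption, not from the thread):

| eval action=case(like('request.path', "auth/ldap/login/%"), "success", true(), "failure")

like() uses SQL-style wildcards, so % matches any trailing segment after auth/ldap/login/ (add a leading % as well if the field value has a prefix before "auth").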
As I said, it only means that you didn't set up wildcard matching correctly.  Check your lookup setup.
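For reference, wildcard matching for a file-based lookup is usually enabled on the lookup definition itself via match_type. A sketch in transforms.conf, with the lookup and field names assumed rather than taken from this thread:

[malware_lookup]
filename = malware_lookup.csv
match_type = WILDCARD(malware_signature)

Without that setting, the * characters stored in the lookup rows are treated as literal text, which is why the lookup command appears not to match.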
Hi Team, how do I write a calculated field for the below?  | eval action=case(like("request.path","auth/ldap/login/names"),"success")  The names part will be changing, and the above is not working.
That's interesting and a little bit odd, because the docs don't really make it clear that these are available to the running search. If you take the metadata tokens and make a search like this

| makeresults
| fields - _time
| eval Name_Of_Search="$name$"
| eval a="$action.email.hostname$"
| eval b="$action.email.priority$"
| eval c="$alert.expires$"
| eval d="$alert.severity$"
| eval e="$app$"
| eval f="$cron_schedule$"
| eval g="$description$"
| eval h="$name$"
| eval i="$next_scheduled_time$"
| eval j="$owner$"
| eval k="$results_link$"
| eval m="$trigger_date$"
| eval n="$trigger_time$"
| eval o="$type$"
| eval p="$view_link$"
| transpose 0

and schedule the search, the output is this:

column           row 1
Name_Of_Search   SS_TEST
a                $action.email.hostname$
b                $action.email.priority$
c                24h
d                3
e                $app$
f                * * * * *
g                Saved Search Description
h                SS_TEST
i                $next_scheduled_time$
j                $owner$
k                $results_link$
m                $trigger_date$
n                $trigger_time$
o                $type$
p                $view_link$

indicating that not all of these metadata tokens are available ($search$ does work but breaks the running search, so I didn't include it). Still, my use case was always getting hold of the name.
Is the current Symantec Bluecoat TA (v3.8.1) compatible with SGOS v7.3.16.2? Has anyone got this to work and can provide some insight? After our proxy admins upgraded from 6.7.x to 7.x, all the field extractions have ceased to work. The release notes say it is compatible up to 7.3.6.1. Is there an updated TA that we are not aware of? https://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/Releasenotes Thanks.
Number 2 does not do sorting in the table itself; that is simply used as the base search in the dashboard to drive the sorting of the visualisation panels, which is what I understood you wanted to do.

There is no practical column limit to the prefix solution, you just need to make the prefix fit the requirement, i.e. change the | eval name=... to

| eval name=printf("_%02d_%s", c, column)

and you will have a sortable 01_xxx 02_yyy syntax.

As for a subsearch, the problem you face is that generally a subsearch runs BEFORE the primary search, so the subsearch cannot generate the structure for the table command as the timechart has not yet run. The exception to that is the appendpipe subsearch, which runs inline with the primary search, which I gave as an example; however, that subsearch is different in that it creates new rows, so it can't be used to push data into the commands in the existing pipeline.

I did figure out how to do the double transpose without knowing the column count:

| transpose 0
| sort - [ | makeresults earliest=-60m@m latest=@m
  | timechart fixedrange=t count
  | stats count as row
  | eval search="row ".row
  | format "" "" "" "" "" "" ]
| transpose 0 header_field=column
| fields - column

The earliest/latest may not be needed in the real world; as long as the timechart and time range match the outer search, it will get the same row count, so the sort will work with the correct column name. If you do find another way, please post here - it's an interesting SPL challenge.
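A minimal sketch of the prefix idea end to end, assuming the row counter c comes from streamstats and that 'host' stands in for whatever split-by field is actually in play:

| timechart span=1h count by host
| transpose 0
| streamstats count as c
| eval column=printf("_%02d_%s", c, column)

After this, the prefixed names keep the intended order even when something downstream (a table, a panel, a token) sorts them alphabetically, and the prefix can be stripped again later with replace() if needed.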
index=testindex sourcetype=json source=websource | timechart span=1h count by JobType

This is my search query to generate a timechart in Splunk. The 'JobType' field has two values, 'Completed' and 'Started'. In the timeframe between when a job is Completed and when the next Started event happens, there are no jobs running, so I need to create a new state called 'Not Running' to illustrate when there are no jobs running. The time between when a job is Started and when it is Completed needs to be called 'Running', because that is when jobs are running. I need to visualize these states in a timechart.

Example: a job completes on 01/06/2024 at 17:00 (Completed). The next job starts on 01/06/2024 at 20:00 (Started). In the timeframe between 17:00 and 20:00 on 01/06/2024, the state is 'Not Running'.

I do not want to capture individual jobs; I want to capture all the jobs. The main values I want to illustrate in the timechart are 'Not Running' and 'Running', so basically I want to illustrate the gaps between the 'Started' and 'Completed' events accordingly. I am stuck with this, so it would be awesome if I could get some help. Thank you.
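One possible starting point (not from this thread, and the field handling is an assumption): derive per-interval states by comparing each event with the previous one, then work out how long each state lasted:

index=testindex sourcetype=json source=websource (JobType="Started" OR JobType="Completed")
| sort 0 _time
| streamstats current=f last(JobType) as prev_JobType last(_time) as prev_time
| eval state=case(prev_JobType="Started" AND JobType="Completed", "Running",
                  prev_JobType="Completed" AND JobType="Started", "Not Running")
| eval duration=_time-prev_time
| table prev_time _time state duration

Turning those intervals into a filled-in timechart would still need an extra step (for example concurrency or makecontinuous plus a fill), so treat this as a sketch of the state derivation only.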
OK, so it seems you have a misunderstanding of the concept of null in Splunk. Null in Splunk means no value - invisible, not a field value. Empty is a value that has no length. What you have is NOT a null field, it is a field with the text string "null", so to remove the values you don't want you can simply do either of these:

| eval ImpCon=mvmap(ImpConReqID, if(isnotnull(ImpConReqID) AND ImpConReqID!="null", ImpConReqID, null()))
| eval ImpCon2=mvfilter(ImpConReqID!="null")
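A self-contained way to try that out (the sample values here are made up, not from the thread):

| makeresults
| eval ImpConReqID=split("abc123,null,def456", ",")
| eval ImpCon2=mvfilter(ImpConReqID!="null")

ImpCon2 keeps abc123 and def456 while the literal string "null" is dropped - which is the behaviour wanted here, and is different from removing genuinely null fields.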
@snosurfur wrote: Stopping splunkd is taking up to 6 minutes to complete with the HFs: 'TcpInputProc [65700 tcp] - Waiting for all connections to close before shutting down TcpInputProcessor'. Has anyone else experienced something similar post upgrade?

Has anything changed on the sending (UF/HF) side? The HF (receiver) waits for the sender to disconnect gracefully before it force-terminates connections, after waiting ~110 seconds (the default).
@pradeepiyer2024, I would approach this from the perspective of either your EDI/X12 gateway or your payer platform. I only have past experience with TriZetto Facets--and it's been a minute--but in g... See more...
@pradeepiyer2024, I would approach this from the perspective of either your EDI/X12 gateway or your payer platform. I only have past experience with TriZetto Facets--and it's been a minute--but in general, the batch jobs themselves should log the output you need. Absent that, your gateway may help you track 834 requests and 999 responses but not failures within the system. I've personally used Splunk Enterprise and Splunk IT Service Intelligence to track X12 transactions end-to-end, but the best fit depends on the software components in your solution and how those components log transactions and store data.
That's not a question about Splunk or its products. It's about how your whole process is organized - what, where and how you can monitor it, and so on. It might not be related to EDI documents at all - just monitoring and logging from whatever process you have, making sure that the unique identifier of an EDI document is stored. It might be a matter of choosing the proper Splunk tools for the process you already have in place and maybe making some slight adjustments to it. In that case your local Splunk Partner will happily help you choose the right products/services (might be Splunk Enterprise, might be Splunk Cloud, might be O11y Cloud, depending on what you have and how it's done). But it might be a bigger consulting project to help you architect the whole process and environment, possibly using Splunk tools. That is beyond the scope of this forum. You can't just throw in "some Splunk" and hope for the best. If your process is OK, you'll probably fit into one of the solutions; if it's not, there are no wonders - GIGO.
@PickleRick End to end means from the moment the files are picked up to the point they hit the target database. My intention is not to read or parse the files; instead, it is to make sure, for example, that if 10 EDI files were consumed in a batch, all those 10 files make it to the target database. Part of it would be understanding how many failed, and at which stage they failed, etc.
@gcusello My intention is not to read or parse the files; instead, it is to make sure, for example, that if 10 EDI files were consumed in a batch, all those 10 files make it to the target database. Part of it would be understanding how many failed, and at which stage they failed, etc.
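A rough illustration of what that tracking could look like once each stage of the pipeline logs the file name, a batch id and a stage name; all of the index and field names below are assumptions, not from this thread:

index=edi_pipeline batch_id=*
| stats values(stage) as stages_reached dc(stage) as stage_count by batch_id file_name
| eval status=if(stage_count=4, "complete", "failed before final stage")
| stats count by batch_id status

The idea is simply to count how far each file got; the hard part is making sure every component actually emits a log line with a consistent identifier.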
@gcusello Well, this ain't that easy. While the format itself is relatively simple, it is - to some extent - structured data. So it's problematic to parse it properly while preserving the relationships between entities (there might be, for example, several instances of a "name" or "address" field, each related to a different person within the same record). So it's more complicated than it looks. Sure, it can be ingested, but it's harder to parse it properly without making a big mess of it. Also @pradeepiyer2024 - what do you mean by "monitor end to end"?
Yes, but it makes no sense to add another layer of processing since you're gonna go through every event anyway. So the best approach here would be:

do your basic search
| lookup enriching your data
| filter out data not matching your criteria based on lookup values
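A concrete sketch of that shape (the index, lookup name and classification value are assumptions, not from this thread):

index=your_index sourcetype=your_sourcetype
| lookup malware_lookup malware_signature OUTPUT classification
| where isnotnull(classification) AND classification!="benign"

With match_type = WILDCARD(malware_signature) set on the lookup definition, the enrichment and the filtering both happen in one pass over the events, with no separate inputlookup subsearch needed.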
https://docs.splunk.com/Documentation/AddonBuilder/4.2.0/UserGuide/ConfigureDataCollection#Add_a_data_input_using_a_REST_API

"Build the data collection for your add-on to gather data from a REST API. A REST data input uses JSON as a data type and supports basic authentication and API-based authentication. For advanced data collection, create a modular input by writing your own Python code."

So if your source returns XML... well, you're on your own here.
You're right, but it'll run every 15 minutes for a limited amount of data, so we can live with the performance hit.
@PickleRick I appreciate your reply. The add-on builder option is what I'll go with. But will the add-on option work with XML data, given the data type is XML and the Splunk documentation only discusses JSON format? If so, do I need to apply the same "JSON path formats"? If not, can you kindly provide the formats or a reference guide?
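Not an answer on the Add-on Builder input itself, but worth noting that once XML events are indexed, fields can still be pulled out at search time with spath; a small self-contained example (the XML sample is made up):

| makeresults
| eval _raw="<response><item><id>1</id><status>ok</status></item></response>"
| spath input=_raw path=response.item.status output=status

So even if the collection hands you raw XML, the "JSON path formats" question largely goes away at search time - spath understands XML with the same dotted path syntax.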
Hey, yes, I use inputlookup to filter the results down to the logs I want to see by malware_signature. After that I want to enrich the table with the classification field, but using the lookup command it won't match the malware_signature values that contain wildcards.