All Posts

Is there a condition or command for manually refreshing a dashboard? Whenever I click the dashboard's refresh button it refreshes, but whenever I refresh the dashboard I want to set a particular token to a specific value. Is that possible?
Try this:
index=testindex sourcetype=json source=websource
| transaction "Transaction.ID"
| chart values(duration) over _time
Hi @JPR the default "user" role has access to the fields menu. Check the capabilities assigned to this role compared to the role you have created.
| rest servicesNS/-/-/authorization/roles/user
| fields title capabilities
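For the comparison, the same rest endpoint can be pointed at the custom role as well; a minimal sketch, where your_custom_role is a placeholder for the role you created:
| rest servicesNS/-/-/authorization/roles/your_custom_role
| fields title capabilities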
Is there a condition for refreshing a dashboard? Something like if(dashboard refresh, 0, 1)
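As far as I know there is no eval-style condition that fires on refresh, but in Simple XML a common workaround is a lightweight background search with a refresh interval whose done handler sets a token each time the search re-runs. A minimal sketch, assuming a Simple XML dashboard; the token name last_refresh and the 30s interval are illustrative only:
<search id="refresh_tracker">
  <!-- re-runs every 30s; each completion sets the token -->
  <query>| makeresults</query>
  <refresh>30s</refresh>
  <done>
    <set token="last_refresh">$job.latestTime$</set>
  </done>
</search>
Note this tracks the timed refresh cycle; clicking the browser's reload button reloads the whole page and re-initialises tokens instead.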
Hi experts, I am going through the installation and setup of the Splunk App for Data Science and Deep Learning. I have come across the minimum requirements for the transformer GPU container at: https://docs.splunk.com/Documentation/DSDL/5.1.2/User/TextClassAssistant What are the minimum requirements for a CPU-only Docker host machine in general when using this toolkit? Thanks, MCW
@KendallW  That's right. There are multiple transactions and each transaction has a transactionID. Each transaction can have a job type which can be either 'Completed' or 'Started'.
Hi @AZ1 Try this:
- Try clearing your browser cache
- Check your browser is up to date
- Try a different browser
- Try updating Splunk
Hi @thangs4, From your second screenshot it doesn't look like the events are being parsed correctly. It looks like there wasn't a clean break between the events, and a timestamp wasn't extracted from the first event. Try using these settings in props.conf on your indexer/HF to explicitly break events before/after the <Event> and </Event> tags:
KV_MODE=xml
TRUNCATE = 0
SHOULD_LINEMERGE = false
LINE_BREAKER=([\r\n]+)\<Event\sxmlns
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9QZ
TIME_PREFIX=<TimeCreated SystemTime='
MUST_BREAK_AFTER = \<\/Event\>
NO_BINARY_CHECK=true
CHARSET=AUTO
disabled=false
Hi everyone, I have a problem with line-breaking in Splunk. I have tried following the methods from other posts. Here is my props.conf:
[test1:sec]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=AUTO
disabled=false
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%9QZ
TIME_PREFIX=<TimeCreated SystemTime='
When I applied this sourcetype to the raw Windows data it worked, but after I finished, it all came in as one event. #line-break
Hi @din98 the 'duration' field from the transaction command sounds like what you're looking for. Do the jobs each have a unique ID field you could run this on?
| transaction <ID>
Assuming request.path is a field name, you are looking for
| eval action=case(like('request.path',"auth/ldap/login/names"),"success")
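If the tail of the path varies (as the question below suggests), like() also accepts SQL-style wildcards, with % matching any run of characters. A sketch along those lines, assuming the fixed part is the auth/ldap/login/ prefix:
| eval action=case(like('request.path',"%auth/ldap/login/%"),"success")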
As I said, it only means that you didn't set up wildcard matching correctly.  Check your lookup setup.
Hi Team, how do I write a calculated field for the below?
| eval action=case(like("request.path","auth/ldap/login/names"),"success")
The names field will be changing. The above is not working.
That's interesting and a little bit odd, because the docs don't really make it clear that these are available to the running search. If you take the metadata tokens and make a search like this
| makeresults
| fields - _time
| eval Name_Of_Search="$name$"
| eval a="$action.email.hostname$"
| eval b="$action.email.priority$"
| eval c="$alert.expires$"
| eval d="$alert.severity$"
| eval e="$app$"
| eval f="$cron_schedule$"
| eval g="$description$"
| eval h="$name$"
| eval i="$next_scheduled_time$"
| eval j="$owner$"
| eval k="$results_link$"
| eval m="$trigger_date$"
| eval n="$trigger_time$"
| eval o="$type$"
| eval p="$view_link$"
| transpose 0
and schedule the search, the output is this:
column            row 1
Name_Of_Search    SS_TEST
a                 $action.email.hostname$
b                 $action.email.priority$
c                 24h
d                 3
e                 $app$
f                 * * * * *
g                 Saved Search Description
h                 SS_TEST
i                 $next_scheduled_time$
j                 $owner$
k                 $results_link$
m                 $trigger_date$
n                 $trigger_time$
o                 $type$
p                 $view_link$
indicating that not all of these metadata tokens are available ($search$ does work, but it breaks the running search, so I didn't include it). Still, my use case was always getting hold of the name.
Is the current Symantec Bluecoat TA (v3.8.1) compatible with SGOS v7.3.16.2? Has anyone got this to work and can provide some insight? After our proxy admins upgraded from 6.7.x to 7.x, all the field extractions ceased to work. The release notes say it is compatible up to 7.3.6.1. Is there an updated TA that we are not aware of? https://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/Releasenotes Thanks.
Number 2 does not do sorting in the table itself; that is simply used as the base search in the dashboard to drive the sorting of the visualisation panels, which is what I understood you wanted to do. There is no practical column limit to the prefix solution, you just need to make the prefix fit the requirement, i.e. change the | eval name=... to
| eval name=printf("_%02d_%s", c, column)
and you will have a sortable 01_xxx 02_yyy syntax. As for a subsearch, the problem you face is that generally a subsearch runs BEFORE the primary search, so the subsearch cannot generate the structure for the table command because the timechart has not yet run. The exception to that is the appendpipe subsearch, which runs inline with the primary search, which I gave as an example; however, that subsearch is different in that it creates new rows, so it can't be used to push data into the commands in the existing pipeline. I did figure out how to do the double transpose without knowing the column count:
| transpose 0
| sort - [ | makeresults earliest=-60m@m latest=@m
    | timechart fixedrange=t count
    | stats count as row
    | eval search="row ".row
    | format "" "" "" "" "" "" ]
| transpose 0 header_field=column
| fields - column
The earliest/latest may not be needed in the real world; as long as the timechart and time range match the outer search, it will get the same row count, so the sort will work with the correct column name. If you do find another way, please post here - it's an interesting SPL challenge.
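For reference, a minimal sketch of the prefix idea end to end, assuming the rows come from | transpose 0 (so the original column names sit in a field called column) and using a streamstats counter c added purely for illustration:
| transpose 0
| streamstats count as c
| eval column=printf("_%02d_%s", c, column)
| fields - c
| transpose 0 header_field=column
| fields - column
The zero-padded counter is what makes the lexicographic column order match the original row order.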
index=testindex sourcetype=json source=websource
| timechart span=1h count by JobType
This is my search query to generate a timechart in Splunk. The 'JobType' field has two values: 'Completed' and 'Started'. In the timeframe between when a job is Completed and before the next Started event, there are no jobs running, so I need to create a new state called 'Not Running' to illustrate when there are no jobs running. The time between when a job is Started and when it is Completed needs to be called 'Running', because that is when jobs are running. I need to visualize these states in a timechart. Example: a job completes on 01/06/2024 at 17:00 (Completed). The next job starts on 01/06/2024 at 20:00 (Started). Between 17:00 and 20:00 on 01/06/2024 the state is 'Not Running'. I do not want to capture individual jobs; I want to capture all the jobs. The main values I want to illustrate in the timechart are the 'Not Running' and 'Running' states, so basically I want to illustrate the gaps between the 'Started' and 'Completed' events accordingly. I am stuck with this, so it would be awesome if I could get some help. Thank you.
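One possible starting point, offered as a sketch rather than a full solution: each 'Started' event ends a 'Not Running' gap and each 'Completed' event ends a 'Running' interval, so streamstats can measure every interval from the previous event's timestamp. This attributes the whole interval to the hour in which it ends, which is a simplification:
index=testindex sourcetype=json source=websource JobType IN ("Started","Completed")
| sort 0 _time
| streamstats current=f last(_time) as prev_time
| eval state=if(JobType=="Started","Not Running","Running")
| eval duration=_time-prev_time
| timechart span=1h sum(duration) as seconds by state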
OK, so it seems you have a misunderstanding of the concept of null in Splunk.
- Null in Splunk means no value: invisible, not a field value
- Empty is a value that has no length
What you have is NOT a null field, it is a field with the text string "null", so to remove the values you don't want you can simply do either of these:
| eval ImpCon=mvmap(ImpConReqID,if(isnotnull(ImpConReqID) AND ImpConReqID!="null", ImpConReqID, null()))
| eval ImpCon2=mvfilter(ImpConReqID!="null")
@snosurfur wrote: "Stopping splunkd is taking up to 6 minutes to complete ... with the HFs: 'TcpInputProc [65700 tcp] - Waiting for all connections to close before shutting down TcpInputProcessor'. Has anyone else experienced something similar post upgrade?"
Has anything changed on the sending (UF/HF) side? The HF (receiver) waits for the sender to disconnect gracefully before it force-terminates connections, after waiting ~110 sec (default).
@pradeepiyer2024, I would approach this from the perspective of either your EDI/X12 gateway or your payer platform. I only have past experience with TriZetto Facets--and it's been a minute--but in general, the batch jobs themselves should log the output you need. Absent that, your gateway may help you track 834 requests and 999 responses but not failures within the system. I've personally used Splunk Enterprise and Splunk IT Service Intelligence to track X12 transactions end-to-end, but the best fit depends on the software components in your solution and how those components log transactions and store data.