All Posts


I have two queries which are giving me two tables, named Distributed & Mainframe, as below:

Distributed -

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10

Mainframe -

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

I am trying to append both tables into one, using something like this:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]

The issue is that I am trying to add a column named "Log_Source" at the start which tells either Distributed or Mainframe for its corresponding result. I am not sure how to achieve it. Please help.
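A minimal sketch of one way to get that column, keeping the two searches above as they are: tag each branch with an eval before its table command, and include the new field in the table list (table drops any field it isn't given):

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| eval Log_Source="Distributed"
| table Log_Source ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | eval Log_Source="Mainframe"
    | rename JOB_MEM_NAME as JobName
    | table Log_Source ScheduleDate JobName StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]

Renaming JOB_MEM_NAME to JobName in the subsearch is optional, but it keeps both result sets in a single JobName column; without it the Mainframe rows would carry a separate JOB_MEM_NAME column.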
Hi all, I have a question about using RelayState with SAML when using Azure AD B2C as the IdP. We successfully managed to integrate Splunk as SP with AD B2C as IdP using SAML and custom policies. Now we want to redirect users to another URL after successful authentication, and the only way forward I could find was via the RelayState parameter. Below are all the combinations I tried for the Single Sign On (SSO) URL:

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?SAML_Request=<base64_SAML_Auth_Request>&RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https://redirectWebsite.com

I keep getting the error "error while parsing relaystate. failed to decode relaystate". Any advice on how to embed the RelayState in the SSO URL?
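One hedged observation (an assumption about Splunk's behaviour, not verified against your tenant): in an SP-initiated flow Splunk generates and consumes the RelayState itself, and it generally expects the value to be a URL-encoded path relative to Splunk Web rather than an arbitrary external URL, so a hand-built absolute URL may fail to decode. A sketch of the shape it is more likely to accept (the /app/search/search path is purely illustrative):

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=%2Fapp%2Fsearch%2Fsearch

If the goal is to land users on a non-Splunk site after login, RelayState may be the wrong tool, and a redirect configured on the B2C side could be worth exploring.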
Good morning. I am receiving events from Windows on a collector with Splunk Edge Processor, and it is sending them correctly to the tenant but not to the correct index. According to the data, events go through the pipeline but are sent to main instead of the intended index.

This is the SPL2 of the pipeline:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| eval index = if(isnull(index), "usa_windows", index)
| into $destination;
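A hedged guess at the cause (an assumption, since the incoming events aren't shown): if the source is already stamping an index field upstream (for example via inputs.conf or a default), isnull(index) is false and the original value wins, so nothing is re-routed. A minimal sketch that sets the index unconditionally for everything flowing through this pipeline:

/* Sketch: overwrite the index outright instead of only filling a null value. */
$pipeline = | from $source
| eval index = "usa_windows"
| into $destination;

If some events should keep their upstream index, checking what value the index field actually carries at this point (for example main vs. null) would show why the conditional never fires.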
@ITWhisperer, yes, I'm trying to set a token based on the value of has_runtime, since I want to show some charts only if that particular data is present. For this I am trying to create a token that I can use to show or hide the charts.
Hello, I have been receiving the events without formatting, and I have installed the add-on on the HF and in the cloud.
I would like some help creating a report that will show the difference in seconds between my event timestamp and the Splunk landing timestamp. I have the query below, which gives me the difference between _indextime and _time, but I would also like the difference in seconds between GenerationTime (e.g. 2024-04-23 12:49:52) and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference
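A minimal sketch of one way to add that, assuming GenerationTime always matches the "%Y-%m-%d %H:%M:%S" format shown above: parse it into epoch time with strptime and subtract it from _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval gen_epoch=strptime(GenerationTime,"%Y-%m-%d %H:%M:%S")
| eval secondsDifference=_indextime-_time
| eval generationLagSecs=_indextime-gen_epoch
| convert ctime(_indextime) as Index_Time
| table Node EventNumber GenerationTime Index_Time _time secondsDifference generationLagSecs

One caveat: strptime interprets the string in the search-time timezone, so if GenerationTime is logged in a different timezone the lag will be offset by that difference.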
In short, there will be a number of installs and configurations you will have to perform; follow this for the Splunk Stream components: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/AboutSplunkStream

High Level Steps
1. Design your architecture and the data flow - ensure all required ports can communicate
2. Ensure your device can send netflow data
3. Configure a Splunk index for the netflow data in Splunk
4. Configure Splunk HEC in Splunk and set it to the netflow index (see the sketch after this list for a quick HEC test)
5. Install and configure the Splunk netflow components (ISF) - this is where you point the device to send its netflow data, which is then sent on to Splunk HEC

Follow this for Splunk HEC configuration: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
Follow this for your architecture - there is a good diagram covering all the components you need to consider: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/DeploymentArchitecture
Example of Stream in action: https://lantern.splunk.com/Data_Descriptors/DNS_data/Installing_and_configuring_Splunk_Stream
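A quick way to verify step 4 before wiring up the ISF - a sketch assuming a hypothetical HEC endpoint, token, and index name (replace all three with your own values):

# Send a test event to HEC; a healthy endpoint answers {"text":"Success","code":0}
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "hec smoke test", "index": "netflow"}'

If the index named in the payload doesn't exist or isn't allowed for the token, HEC rejects the event, which is worth ruling out before troubleshooting the Stream side.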
Yes, all the indexers are up and running fine. So what would be a possible solution to mitigate the above issue?
@ITWhisperer Thank you for your assistance. I've successfully tested the query you provided and managed to retrieve the expected success and failure counts. However, in addition to this, I also need to retrieve the step count for each step from Query 1.

For instance, when I ran Query 1 individually for the last 30 days, I observed that 20 steps were extracted as a field along with the count of each step. Similarly, when I ran Query 2 for the same period, approximately 10 successes were extracted as a field with their respective counts. Likewise, when I ran Query 3, around 18 failures were extracted as a field with their counts.

So, with the combined search you provided, I'm able to obtain a total of 18 fields comprising both successes and failures. This is because if any of the step fields have either a success or a failure, it reflects in the query output. However, the other two step fields don't have successes or failures, and their step information is not present in the output of the provided query.

Therefore, we need to include the output of the first query, which includes the step field along with its count, in addition to the success and failure counts obtained from Queries 2 and 3. Since Query 1 captures all events, Query 2 captures successes, and Query 3 captures failures, we need to ensure that the first query's output is also included in the combined search. Could you please review and update accordingly? Thank you for your attention to this matter.
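A hedged sketch of one way to keep every step from Query 1, even those with no successes or failures: since Query 1 captures all events, the subsets from Queries 2 and 3 can be counted conditionally from the same base search. All names here (the base filter, the step field, and the success/failure conditions) are placeholders for whatever your actual three queries use:

<Query 1 base search>
| eval outcome=case(<Query 2 success condition>, "success",
                    <Query 3 failure condition>, "failure",
                    true(), "other")
| stats count as StepCount,
        count(eval(outcome="success")) as SuccessCount,
        count(eval(outcome="failure")) as FailureCount
    by step

Because the stats runs over all events, every step gets a row with its total count, and steps with no successes or failures simply show zeros in those columns.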
In the done handler, you only have access to the first row of the results, so you would only be able to set a token based on the first result. Is this what you are actually trying to do?
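A minimal Simple XML sketch of that pattern, assuming a hypothetical search whose first result row carries a has_runtime field:

<search>
  <query>index=my_index | stats count as has_runtime</query>
  <done>
    <condition match="'$result.has_runtime$' > 0">
      <set token="show_runtime">true</set>
    </condition>
    <condition>
      <unset token="show_runtime"></unset>
    </condition>
  </done>
</search>

Panels that should appear only when the data is present can then declare depends="$show_runtime$". Note that $result.has_runtime$ reads from the first result row only, as described above.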
And each of your indexers is in a different site? And all your indexers are online and OK? (BTW, this seems a bit overly complicated; unless you're planning some further expansion of your environment, you could perfectly well get away with a single-site setup.)
yes...
We have a total of 3 indexers, and below is the site configuration:

Replication Factor: origin:1, site1:1, site2:1, site3:1, total:3
Search Factor: origin:1, total:2

Can you please help me with that? Thanks
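For reference, a sketch of how that policy would look in server.conf on the cluster manager - the site names are assumed from the factors you posted, and mode = manager may be mode = master on older Splunk versions:

[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2,site3
site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
site_search_factor = origin:1,total:2

Note that with site1:1, site2:1, site3:1 all explicit, every bucket must have a copy in each site. With exactly one indexer per site, losing any single indexer (or its site) makes the replication factor unmeetable, which matches the fixup message discussed in this thread.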
This appears to be a bug - I would log a support call for this. (You may need to upgrade.)
Thank you @deepakc for your reply. In our deployment, we are using a UF on our rsyslog box, and every data source is sent to the rsyslog server into a specific file; we then use "Monitor Files & Directories" as the data input. As you mentioned, "if the F5 is not switching it might be due to the continuous stream of syslog data being sent".

- I believe the solution to the LB issue is to increase the size of these files on the UF itself, but in this scenario the second UF will only take over if the first one is down, because of the continuous stream of syslog data.
- As another suggestion, can we configure our LB to achieve the following: if the first Universal Forwarder becomes overwhelmed by the continuous stream of syslog data, another UF can take over and handle the load?

Please advise on the best practice for this scenario.
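One common mitigation for long-lived syslog TCP connections pinning to a single backend - a sketch, assuming the sources run rsyslog with the omfwd output and that lb-vip.example.com is your F5 VIP (both assumptions): have the senders periodically re-open the connection so the load balancer gets a chance to pick a different UF:

# /etc/rsyslog.d/forward.conf on each source host (sketch)
# RebindInterval closes and re-opens the connection every N messages,
# giving the F5 an opportunity to re-balance across backend UFs.
action(type="omfwd"
       target="lb-vip.example.com"
       port="514"
       protocol="tcp"
       RebindInterval="10000")

This spreads load rather than providing overflow-based failover; a load balancer generally balances connections, not UF queue pressure, so "switch when the first UF is overwhelmed" isn't something the F5 can see directly.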
It depends on your architecture and your config. How many indexers do you have, and in which sites are they? What is your siteRF/siteSF configuration?
As @gcusello already pointed out, the Universal Forwarder by default has a limit on data throughput, so if you have too many events coming in, the UF might not keep up with sending them out quickly enough (the same can happen if your network bandwidth is too low).

The first question, though, is where the latency appears - look into the Forwarded Events log on your WEC machine and verify whether the events you see are current or delayed; that's the first hint as to where to troubleshoot.

There are also two different modes in which WEF operates: in push mode the source machines send the events to the WEC machine, but in pull mode the WEC machine actively pulls the events from the source machines on a given schedule (I'm not sure if push mode is continuous only or works with scheduled periods as well). That's something you should discuss with your Windows admins. (I suppose there can also be other factors causing WEF delays.)

Another thing that shows up when you exceed a given performance level is that WinEventLog inputs seem to get capped at some point, and you can't go beyond a certain throughput using a single input (even though the machine itself is perfectly capable of handling additional load). In that case the solution is to create additional event log channels beside the "normal" Forwarded Events, split the events from the subscriptions into multiple channels, and ingest them with the UF from those channels. But that's a relatively advanced topic (on the Windows side).
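The UF side of that channel split is just one stanza per channel in inputs.conf - a sketch, where CustomEventChannels/Security1 is a hypothetical custom channel your Windows admins would have created for a subset of the subscriptions:

# inputs.conf on the WEC machine's UF
[WinEventLog://ForwardedEvents]
disabled = 0
renderXml = true

# Hypothetical extra channel carrying part of the subscription traffic
[WinEventLog://CustomEventChannels/Security1]
disabled = 0
renderXml = true

The Windows-side work (creating the custom channel and pointing subscriptions at it) is the advanced part mentioned above.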
Hi, could you please let me know how to resolve the issue? Thanks
The message says what your cluster is missing - an indexer located in site3 to which the bucket could be replicated.
Ok the further we go down this thread, the more confusing it gets. You contradict yourself. In one place you say that it's a standalone search-head then in another you say that it's a part of a clust... See more...
Ok, the further we go down this thread, the more confusing it gets. You contradict yourself: in one place you say that it's a standalone search head, then in another you say that it's part of a cluster. So there are two possible scenarios:

1) It is indeed one of the search heads in a cluster, managed by a deployer, but you manually installed an app on just one of those search heads. That still doesn't make the server a standalone search head.

2) It is a standalone search head (not part of a search head cluster). It is _not_ managed by a deployer. It _might_ be managed by a deployment server, but might as well be managed by something external.

So which one is it? Also, I would expect ThreatQ support to tell you it's not their problem, because it has nothing to do with the app itself - it's about your Splunk environment.