All Topics


Hello everyone, please assist me with the problem below. I'm trying to figure out how to track it in Splunk when a field's position changes in the logs. Is that kind of tracing possible with SPL? Example: when we onboard the logs into Splunk, fields sit at certain positions; if a field's position later changes, how can I trace that with SPL? Please guide me.
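A minimal SPL sketch of one common approach, assuming a hypothetical index my_index, sourcetype my_sourcetype, and field status (all placeholders): when a field's position shifts, the existing extraction usually stops matching, so the share of events where the field is null suddenly jumps.

    index=my_index sourcetype=my_sourcetype
    | eval status_missing=if(isnull(status), 1, 0)
    | timechart span=1h sum(status_missing) as missing count as total
    | eval missing_pct=round(missing/total*100, 2)

A sustained spike in missing_pct flags the point in time where the field moved.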
Hello Splunkers!! I am facing an issue while running the search below, as you can see in the screenshot. Can anyone help me fix it?

Search query:

    | makeresults
    | addinfo
    | eval earliest=max(trunc(info_min_time),info_min_time), latest=min(max(trunc(info_max_time),info_max_time+0),2000000000)
    | map search="search `indextime`>=`bin($earliest$,300)` `indextime`<`bin($earliest$,300,+300)` earliest=`bin($earliest$,300,-10800)` latest=`bin($latest$,300,+300)``"
    | where false()

Screenshot for a query error:
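In SPL, backticks invoke macros, so `indextime` and `bin($earliest$,300)` inside the map search string are expanded as macro calls; unless macros with exactly those names exist, the search errors out (and there is also a stray extra backtick before the closing quote). A sketch of the likely intent, computing the 5-minute bins with eval first and using the built-in index-time bounds instead of macros (the index name my_index is a placeholder):

    | makeresults
    | addinfo
    | eval earliest=info_min_time, latest=info_max_time
    | eval e_bin=floor(earliest/300)*300, l_bin=floor(latest/300)*300
    | map search="search index=my_index _index_earliest=$e_bin$ _index_latest=$l_bin$ earliest=$e_bin$ latest=$l_bin$"

_index_earliest/_index_latest bound events by index time, which is presumably what `indextime` was meant to do.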
Hi Team, I have created a federated provider and the test connection was successful. What should our next steps be? Is it mandatory to create a federated index? If yes, should federated indexes be created for all the indexes across the SHs?
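As a hedged pointer: with standard-mode federated search, yes, you define a federated index on the local deployment that maps to a remote dataset (typically an index) on the provider, and only mapped datasets are searchable. You then query it with the federated: prefix, for example (remote_web is a placeholder name):

    index=federated:remote_web sourcetype=access_combined
    | stats count by host

You do not have to map every index on the remote search heads, only the ones you actually need to search.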
I'm trying to set up a logarithmic scale on my y-axis, and couldn't find anything that's relevant - the XML syntax doesn't match the dashboard editor, and I'm a bit confused.   I tried doing this, ripping the line from the properties of an identical search I ran with a logarithmic y-axis, but I'm not getting results.   Any help would be appreciated; thanks!
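For Simple XML dashboards, the documented charting option is axisY.scale; a minimal snippet to place inside the <chart> element:

    <option name="charting.axisY.scale">log</option>

If the dashboard editor UI doesn't expose this, adding the option directly in the XML source should still take effect.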
I'm not sure how to filter or disable the processing of ANSI escape codes, as recommended by Splunk in response to the recently announced log injection vulnerability. We have a clustered environment running 9.0.5 on AWS EC2 instances running Linux. How can I implement these recommendations?
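One common approach (a sketch, not the advisory's literal text) is to strip the escape sequences at parse time with a SEDCMD in props.conf on the indexer cluster, pushed from the cluster manager as part of the manager-apps bundle; your_sourcetype is a placeholder for each affected sourcetype:

    # props.conf
    [your_sourcetype]
    # strip ANSI CSI sequences (ESC [ ... final byte) before indexing
    SEDCMD-strip_ansi = s/\x1b\[[0-9;]*[A-Za-z]//g

This only affects newly indexed data; events already on disk are unchanged.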
Hello, I have a bar chart with some dates in the legend. I need to add another value to this name. How can I do that?
Hi, we have a distributed Splunk environment and I have successfully deployed the UF to a Windows Server; data is getting into my indexer. My question is about the Search Head cluster: I believe I also need to deploy the TA to the search heads so that search-time settings from props.conf and other files are applied. Do I need to deploy the complete TA to my search heads, or just specific files? Thanks in advance, Alex
Hello everyone, I am trying to SUM the columns.

    index="nzc-neel-uttar" source="http:kyhkp"
    | timechart span=1d count by Type
    | eval "New_Date"=strftime(_time,"%Y-%m-%d")

Current output:

    _time        Type-A   Type-B   New_Date
    20/07/2023   3        8        20/07/2023
    21/07/2023   4        23       21/07/2023
    22/07/2023   66       0        22/07/2023
    23/07/2023   90       0        23/07/2023
    24/07/2023   0        6        24/07/2023
    25/07/2023   0        23       25/07/2023

Desired output:

    New_Date     Type-A   Type-B   Total
    20/07/2023   3        8        11
    21/07/2023   4        23       27
    22/07/2023   66       0        66
    23/07/2023   90       0        90
    24/07/2023   0        6        6
    25/07/2023   0        23       23

Please suggest. Thanks
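A sketch using addtotals, which sums the Type-* columns per row; note the date format below matches the desired dd/mm/yyyy output rather than the %Y-%m-%d in the original eval:

    index="nzc-neel-uttar" source="http:kyhkp"
    | timechart span=1d count by Type
    | addtotals fieldname=Total Type-*
    | eval New_Date=strftime(_time, "%d/%m/%Y")
    | table New_Date Type-A Type-B Total

Giving addtotals an explicit field list restricts the row sum to just those columns.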
In Splunk SaaS Cloud, how would I get daily data ingest volume by indexes?
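A common sketch against the license usage log (b is bytes and idx is the index name in that log; in Splunk Cloud, if _internal access is restricted, the Cloud Monitoring Console app exposes the same view):

    index=_internal source=*license_usage.log* type="Usage"
    | eval GB=b/1024/1024/1024
    | timechart span=1d sum(GB) as GB by idx

This reports licensed ingest per index per day, which is usually what "daily ingest volume" means.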
Hi there, I am currently trying to set up specific events to be sent to a separate index. The documentation on how to do this was quite confusing to me, so I assume I am making a very obvious mistake. I can provide any necessary information. Any help would be appreciated, Jamie
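For reference, the standard index-routing pattern is a props.conf/transforms.conf pair on the parsing tier (indexers or heavy forwarders); the sourcetype, regex, and index names below are placeholders:

    # props.conf
    [your_sourcetype]
    TRANSFORMS-route_special = route_special_events

    # transforms.conf
    [route_special_events]
    REGEX = pattern_that_identifies_the_events
    DEST_KEY = _MetaData:Index
    FORMAT = separate_index

The target index must already exist, and the settings take effect only for data parsed after a restart.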
I'm trying to create something that displays long-term outages: any index that hasn't had traffic in the last hour. I've made heartbeat alerts that notify when outages occur, but they're limited to an hour to save resources. After that hour, they drop off the face of the earth and aren't accounted for - this is okay for alerts, but not for a dashboard, where persistence is the goal. I'm trying to create a search that returns the names of any index that's had 0 logs in the last hour. I have this so far:

    | tstats count where [| inputlookup monitoringSources.csv | fields index | format] earliest=-1h latest=now() by index
    | where count=0

However, I know this doesn't work, as I have a dummy index name in that .csv file that doesn't exist. If I'm not mistaken, it should be returning the dummy index with a count of 0 (it does not). How could I do this without inflating the search time range past an hour?
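The reason this returns nothing: tstats only emits rows for data that exists, so an index with zero events in the window produces no row at all, and where count=0 never matches. A sketch that keeps the 1-hour range and fills in the zeros from the lookup itself:

    | tstats count where [| inputlookup monitoringSources.csv | fields index | format] earliest=-1h latest=now() by index
    | append [| inputlookup monitoringSources.csv | fields index | eval count=0]
    | stats sum(count) as count by index
    | where count=0

Every index in the CSV gets a zero row appended; indexes that did report data sum above zero and drop out.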
Please share if anyone has an idea about the severity (Warn/Critical) and violation-status variables available in the HTTP REST API integration.
Hi Splunkers, I need to show some stakeholders the correlation searches that we have enabled and that are aligned to the MITRE ATT&CK framework. I've tried using the REST command and I can find all the annotations under the "action.correlationsearch.annotations" field, but I would like to narrow it down to only MITRE ATT&CK. Does anyone know how to build this search?
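A sketch, assuming the annotations field holds the usual JSON with a mitre_attack array (verify against your own data, since the structure can vary by ES version):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search action.correlationsearch.enabled=1 disabled=0
    | spath input=action.correlationsearch.annotations path=mitre_attack{} output=mitre_attack
    | where isnotnull(mitre_attack)
    | table title mitre_attack

spath pulls just the mitre_attack array out of the annotations JSON, so searches annotated only with other frameworks fall out at the where clause.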
Hi, has anyone managed to replicate the depends functionality for showing/hiding a panel from classic Simple XML in the new Splunk Dashboard Studio? My goal is to click on a single value visualization in Dashboard Studio and set a token with the click, which will then make another panel appear below it. This would then change if I click on another single value, changing the token value and displaying a different visualization below. This is the functionality I'm speaking about: the click on the single value should set the token $tokenfordisplay$, and then, on the panel that should appear and disappear:

    <panel depends="$tokenfordisplay$">

Thanks for any help
Hi, against my corporate account I want to enable a webhook action to get all responses for a query into my Java API, which I want to consume further. I want to know whether a Splunk Enterprise webhook is the correct approach for this. Also, if I configure my API URL in a Splunk webhook alert, will I immediately start getting the payload from Splunk, or will I need to add the URL to an allow list first? Finally, will I get the complete result set for the query in the payload, or just certain fields?
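For orientation, the built-in webhook alert action POSTs a small JSON document, roughly shaped like this hedged sample (all values are placeholders); notably, result carries only the first result row, not the full result set:

    {
      "search_name": "My Alert",
      "sid": "scheduler__admin__search__...",
      "app": "search",
      "owner": "admin",
      "results_link": "https://splunk.example.com:8000/app/search/@go?sid=...",
      "result": { "host": "web01", "count": "42" }
    }

To consume all rows, the receiving service typically calls back into the Splunk REST API using the sid (or the results_link) to fetch the complete result set. On Splunk Cloud, outbound webhook URLs generally do have to be added to an allow list before the alert action will deliver.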
I have taken over a project from 2 colleagues to install and integrate Vectra AI and Splunk. We have a Vectra X29 as Brain/Sensor running Cognito Detect 7.0.2. I have got the Vectra part up and running but have problems getting data into Splunk. A Splunk representative recommended I use SC4S instead of sending the syslog data directly to Splunk, which runs on a W2019 Server platform (cannot install syslog-ng). SC4S runs on a CentOS Stream 8 server in a Podman container. Now, for the Vectra-specific part:

1) Should I use Cognito Stream to send syslog to SC4S, and if yes, in syslog or JSON format (some documentation recommends this with the Universal Forwarder for Splunk)? JSON doesn't seem to work as it is now. I have configured HEC forwarding from SC4S to Splunk as recommended by the documentation.

2) Should I use Notifications => Syslog to send syslog to SC4S, and if yes, in syslog or JSON format?

3) Can I send directly to Splunk's Vectra Stream App?

Both 1 and 2 seem to work as far as SC4S is concerned, but there I bump into problems, and I'm not sure what the problem is. HEC forwarding from SC4S to Splunk comes alive as it should with the correct setup and forwards Vectra data (nothing else is collected by SC4S) to Splunk - or maybe it doesn't, since in Splunk I see drop events.

I have configured a filter for Vectra in /opt/sc4s/env_file:

    SC4S_LISTEN_VECTRA_NETWORKS_X_SERIES_TCP_PORT=9101

which should identify the data as Vectra-originated, but I'm not sure SC4S handles it correctly. I lack documentation on how to troubleshoot indexed data in SC4S, plus how to correctly configure /opt/sc4s/env_file and any other files needed. I have configured all indexes according to the SC4S documentation.

In Splunk I can see incoming events with action=drop:

    26/07/2023 - - syslog-ng 155 - [meta sequenceId="16928"] http: handled by response_action; action='drop', url='htps://x.x.x.x:8088/services/collector/event', status_code='400', driver='d_hec_fmt#0', location='root generator dest_hec:5:5'
    12:19:03:144 Host = abcdlog2 | source = sc4s | sourcetype = sc4s:events

    26/07/2023 - - syslog-ng 155 - [meta sequenceId="16929"] Message(s) dropped while sending message to destination; driver='d_hec_fmt#0', worker_index='7', time_reopen='10', batch_size='1'
    12:19:03:144 Host = abcdlog2 | source = sc4s | sourcetype = sc4s:events

Any advice would be appreciated.

Timo Krjukoff
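On the drop events: an HTTP 400 from the HEC endpoint usually means the index named in the event metadata does not exist on the Splunk side, or the HEC token is not permitted to write to it, so the first thing to verify is that every index SC4S targets (including the Vectra ones) exists and is enabled for the token. For reference, a hedged sketch of the relevant env_file settings (variable names as in the SC4S 2.x documentation; check them against your SC4S version, and all values are placeholders):

    # /opt/sc4s/env_file
    SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your-splunk-host:8088
    SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<hec-token>
    SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
    SC4S_LISTEN_VECTRA_NETWORKS_X_SERIES_TCP_PORT=9101

Watching the container logs (e.g. podman logs SC4S) while a test event arrives can also show which index and sourcetype SC4S is assigning to the Vectra traffic.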
Hi Team, is it possible to update a Dashboard Studio dashboard in a Splunk Cloud instance in real time, as events arrive in Splunk from an S3 bucket (I am using SQS-based S3 inputs from the Splunk Add-on for AWS), without manually refreshing the dashboard? I would like to display an image (like on/off) in the dashboard based on the events coming in, refreshing every 5 or 10 seconds or so. Thanks
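Dashboard Studio doesn't push updates, but a data source can poll on a short interval; a hedged sketch of the dashboard JSON (the search and names are placeholders):

    "dataSources": {
        "ds_status": {
            "type": "ds.search",
            "options": {
                "query": "index=my_aws_index | stats latest(state) as state",
                "refresh": "10s",
                "refreshType": "delay"
            }
        }
    }

refresh sets the interval, and refreshType "delay" re-runs the search that long after the previous run finishes; at 5-10 seconds this behaves like a live view without manual refreshes.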
What's the fastest way to import into the KV Store? I have about 650,000 rows, and importing via the "Lookup File Editing" app is slow - the import takes approximately a few hours. Is there a faster way to import into the KV Store? Or should I create a new dummy index and import into that instead - would that be faster?
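A usually much faster path is to upload the CSV as a plain file-based lookup and copy it into the collection with SPL (assuming a KV Store collection and a lookup definition named my_kvstore_lookup already exist; both names are placeholders):

    | inputlookup my_650k_rows.csv
    | outputlookup my_kvstore_lookup

This writes server-side in bulk rather than row-by-row through the UI. The REST endpoint storage/collections/data/<collection>/batch_save is another bulk option if you are loading from outside Splunk. A dummy index would not help here; indexes and the KV Store are different stores.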
In the graph below, I see values displayed on top of each bar. How do I remove them?
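If this is a Simple XML chart, the documented option for those labels is the following (a sketch; add it inside the <chart> element):

    <option name="charting.chart.showDataLabels">none</option>

In Dashboard Studio there is a corresponding data-value-label toggle in the visualization's configuration panel.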