Morning, Splunkers. I've got a dashboard that gets some of its input from an external link. The input that comes in determines which system is being displayed by the dashboard, with different settings applied through a <change> block in each, and then shows the necessary information in a line graph. That part is working perfectly, but what I'm trying to do is set the color of the line graph based on the system chosen, and I'm trying to keep it simple for future edits. I've set the colors I'm currently using in the <init> section as follows:

<init>
  <set token="red">0xFF3333</set>
  <set token="purple">0x8833FF</set>
  <set token="green">0x00FF00</set>
</init>

The system selection looks like this:

<input token="system" depends="$NotDisplayed$">
  <change>
    <condition value="System-A">
      <set token="index_filter">index_A</set>
      <set token="display_name">System-A</set>
      <set token="color">$purple$</set>
    </condition>
    <condition value="System-B">
      <set token="index_filter">index_B</set>
      <set token="display_name">System-B</set>
      <set token="color">$green$</set>
    </condition>
    <condition value="System-C">
      <set token="index_filter">index_C</set>
      <set token="display_name">System-C</set>
      <set token="color">$red$</set>
    </condition>
  </change>
</input>

I now have a single query window putting up a line graph with the necessary information brought in from the external link. As I said above, that part works perfectly, but what DOESN'T work is the color. Here's what my option field currently looks like:

<option name="charting.fieldColors">{"MyField":$color$}</option>

The idea here is that if I add future systems, I don't have to keep punching in hex codes for colors; I just enter a color-name token. Unfortunately, what ends up happening is that the line graph color is black, no matter which color I use. If I take the $color$ token out of the code and put in the hex code directly, it works fine. It also works if I put the hex code directly in the system selection instead of the color-name token.

Is there a trick to having a token reference another token in a dashboard? Or is this one of those "quit being fancy and do it the hard way" type of things? Any help will be appreciated. Running Splunk 8.2.4, in case it matters.
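One workaround worth trying (a sketch only, not verified on 8.2.4): skip the token-in-token indirection entirely and compute the hex code directly in the change handler with an <eval> element, so $color$ always holds a literal hex string by the time charting.fieldColors reads it:

```xml
<input type="dropdown" token="system">
  <change>
    <!-- case() maps the selected system straight to a hex code, so the   -->
    <!-- intermediate red/purple/green tokens are no longer needed.       -->
    <!-- The final true() branch is a fallback color for unknown systems. -->
    <eval token="color">case($value$=="System-A", "0x8833FF", $value$=="System-B", "0x00FF00", $value$=="System-C", "0xFF3333", true(), "0xFFFFFF")</eval>
  </change>
</input>
```

With this, <option name="charting.fieldColors">{"MyField":$color$}</option> receives a literal value. Whether nested token resolution inside <set> ever works in 8.2.4 is the open question in the post; this sidesteps it, at the cost of keeping the color map inside the change handler instead of <init>.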
Hello, After upgrading from Classic to Victoria Experience on our Splunk Cloud stack, we have encountered issues retrieving data from AWS SQS-based S3. The inputs remained after the migration, but for some, it seems the SQS queue name is missing. When we try to configure these inputs, we immediately receive a 404 error in the python.log. Please see the screenshot below for reference. Furthermore, the error message indicates that the SQS queue may not be present in the given region. However, we have confirmed that the queue does exist in the specified region. Has anyone else experienced this issue and can offer assistance? Thank you.
Has anyone noticed that push notifications through the Splunk Mobile app have stopped working recently? We are using Splunk on-prem, with Splunk Secure Gateway set up with prod.spacebridge.spl.mobi as the gateway, but I noticed the notifications stopped appearing on my home screen when my iPhone was locked. Other colleagues using different devices are reporting the same issue.

I can't remember the exact date, but it may have been around the 3rd of May.

No changes to our config have been made, but I'd be interested to know if anyone else is having this issue.
Hi, we have Splunk (v9.2) in a clustered environment that manages tons of different logs from a complex and varied network. A few departments each have a Sophos firewall that sends logs through syslog (we would have used a UF, but we couldn't, because IT security can't touch those servers). In order to split the inputs based on the source type, we set those Sophos logs to be sent to port 513 of one of our HFs and created an app to parse them with a regex. The goal was to reduce the logs and save license usage. So far, so good... everything was working as intended... until...

As it turns out, every night, exactly at midnight, the Heavy Forwarder stops the collection from those sources (only those) and nothing is indexed until someone restarts the splunkd service (which could potentially be never), which gives new life to the collector. Here's the odd part: during the no-collection time, tcpdump shows syslog data being received on port 513, so the firewall never stops sending data to the HF, but no logs are indexed. Only after a restart do we see logs indexed again. The Heavy Forwarder at issue sits on top of an Ubuntu 22 LTS minimized server edition.

Here are the app configuration files:

inputs.conf:

[udp:513]
sourcetype = syslog
no_appending_timestamp = true
index = generic_fw

props.conf:

[source::udp:513]
TRANSFORMS-null = nullQ
TRANSFORMS-soph = sophos_q_fw, sophos_w_fw, null_ip

transforms.conf:

[sophos_q_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = queue
FORMAT = indexQueue
#
[sophos_w_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = _MetaData:Index
FORMAT = custom_sophos
#
[null_ip]
REGEX = dstip=\"192\.168\.1\.122\"
DEST_KEY = queue
FORMAT = nullQueue

We didn't see anything out of the ordinary in the processes that start at midnight on the HF. At this point we have no clue about what's happening. How can we troubleshoot this situation? Thanks
I'm trying to run personal scripts in Splunk from a dashboard. I want the dashboard to call a script based on user input and then output the script's result to a table. I'm testing the ability with a Python script that calls a PowerShell script, returns the data to the Python script, and then returns the data to the Splunk dashboard. This is what I have so far:

Test_PowerShell.py Python script:

import splunk.Intersplunk
import sys
import subprocess

results,unused1,unused2 = splunk.Intersplunk.getOrganizedResults()

# Define the path to the PowerShell script
ps_script_path = "./Test.ps1"

# Define the argument to pass to the PowerShell script
argument = sys.argv[1]

# Execute the PowerShell script with the argument
results = subprocess.run(['powershell.exe', '-File', ps_script_path, argument], capture_output=True, text=True)

splunk.Intersplunk.outputResults(results)

Page XML:

<form version="1.1" theme="dark">
  <label>Compliance TEST</label>
  <description>TESTING</description>
  <fieldset submitButton="false" autoRun="false"></fieldset>
  <row>
    <panel>
      <title>Input Panel</title>
      <input type="text" token="user_input">
        <label>User Input:</label>
        <default>*</default>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <title>Script Output</title>
      <table>
        <search>
          <query>| script python testps $user_input$ | table field1</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Test.ps1 PowerShell script:

Write-Host $args[0]

commands.conf:

[testps]
filename = Test_PowerShell.py
streaming = true
python.version = python3

default.meta:

[commands/testps]
access = read : [ * ], write : [ admin ]
export = system
[scripts/Test_PowerShell.py]
access = read : [ * ], write : [ admin ]
export = system

The error I'm getting is the following: External search command 'testps' returned error code 1.
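For what it's worth, one likely culprit in the Python above (a guess from reading the code, not a confirmed diagnosis): splunk.Intersplunk.outputResults expects a list of dicts (one per result row), but the script passes the subprocess.CompletedProcess object straight through. A minimal sketch of the conversion step, with the Splunk-specific parts left in comments so the snippet runs anywhere:

```python
import subprocess  # used in the real command to run powershell.exe

def rows_from_stdout(stdout, field="field1"):
    """Turn raw script output into the list-of-dicts shape that
    splunk.Intersplunk.outputResults expects (one dict per row)."""
    return [{field: line} for line in stdout.splitlines() if line.strip()]

# In the real command you would do something like:
#   completed = subprocess.run(["powershell.exe", "-File", ps_script_path, argument],
#                              capture_output=True, text=True)
#   splunk.Intersplunk.outputResults(rows_from_stdout(completed.stdout))
rows = rows_from_stdout("host-a\nhost-b\n")  # → [{'field1': 'host-a'}, {'field1': 'host-b'}]
```

The field name field1 matches the | table field1 in the dashboard query; any other column names would need to appear as keys in each row dict.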
Hello, I wanted to ask if there is a way in Splunk to collect failed login data for users on a virtual machine hosted with VMware, so that I can see whether a user tried to log in to a VM, say, 5 times and failed all 5 attempts. It would be nice to use this for finding out if some kind of brute-force attack or something else is going on.
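Assuming the ESXi/vCenter logs are already being indexed (for example via syslog or the Splunk Add-on for VMware), a search along these lines could surface repeated failures. The index name, the match string, and the field names below are placeholders that depend entirely on how the data is onboarded:

index=vmware "authentication failed"
| stats count AS failures BY user, host
| where failures >= 5

Saved as an alert on a short schedule, this would flag possible brute-force attempts; the prerequisite is getting the guest or host authentication logs into Splunk in the first place.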
Hello Team, we are getting the below error while deploying the Java agent. For a few minutes it comes up, and after some time the agent crashes along with the application. It seems there is some issue while instrumenting the class. Below are the logs for your reference.

[main] 17 May 2024 11:25:46,397 WARN LightweightThrowable - java.lang.NoSuchMethodException: java.lang.Throwable.getStackTraceElement(int) caught trying to reflect Throwable methods
[AD Thread Pool-Global0] 17 May 2024 11:25:48,998 INFO ErrorProcessor - Sending ADDs to register [ApplicationDiagnosticData{key='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:', name=SQLSyntaxErrorException : OracleDatabaseException, diagnosticType=ERROR, configEntities=null, summary='java.sql.SQLSyntaxErrorException caused by oracle.jdbc.OracleDatabaseException'}]
[AD Thread Pool-Global0] 17 May 2024 11:25:48,998 INFO ErrorProcessor - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global0] 17 May 2024 11:25:49,094 INFO ErrorProcessor - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-Global0] 17 May 2024 11:25:49,194 INFO ErrorProcessor - Restoring Context ClassLoader to com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader@14bf9759
[AD Thread Pool-Global0] 17 May 2024 11:25:49,194 INFO ErrorProcessor - Error Objects registered with controller :{java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:=1873198}
[AD Thread Pool-Global0] 17 May 2024 11:25:49,194 INFO ErrorProcessor - Adding entry to errorKeyToUniqueKeyMap [1873198], ErrorKey[cause=[java.sql.SQLSyntaxErrorException, oracle.jdbc.OracleDatabaseException]], java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:
[AD Thread Pool-Global0] 17 May 2024 11:25:49,294 INFO ErrorProcessor - Sending ADDs to register [ApplicationDiagnosticData{key='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1465592621', name=java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:, diagnosticType=STACK_TRACE, configEntities=[Type:ERROR, id:1873198], summary='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:'}]
[AD Thread Pool-Global0] 17 May 2024 11:25:49,294 INFO ErrorProcessor - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global0] 17 May 2024 11:25:49,336 INFO ErrorProcessor - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-Global0] 17 May 2024 11:25:49,396 INFO ErrorProcessor - Restoring Context ClassLoader to com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader@14bf9759
[AD Thread Pool-Global0] 17 May 2024 11:25:49,396 INFO ErrorProcessor - Error Objects registered with controller :{java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1465592621=2272870}
[AD Thread Pool-Global0] 17 May 2024 11:25:49,396 INFO ErrorProcessor - Adding entry to errorKeyToUniqueKeyMap [2272870], StackTraceErrorKey{hashCode=-1465592621}, java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1465592621
[AD Thread Pool-Global0] 17 May 2024 11:25:56,893 INFO DynamicRulesManager - The config directory /opt/appdyn/javaagent/23.12.0.35361/ver23.12.0.35361/conf/namicggtd52d-onboarding-25-ll7bx--1 is not initialized, not writing /opt/appdyn/javaagent/23.12.0.35361/ver23.12.0.35361/conf/namicggtd52d-onboarding-25-ll7bx--1/bcirules.xml
[AD Thread-Metric Reporter0] 17 May 2024 11:26:07,293 INFO MetricSender - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global1] 17 May 2024 11:26:38,997 INFO ErrorProcessor - Sending ADDs to register [ApplicationDiagnosticData{key='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1702013436', name=java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:, diagnosticType=STACK_TRACE, configEntities=[Type:ERROR, id:1873198], summary='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:'}]
[AD Thread Pool-Global1] 17 May 2024 11:26:38,997 INFO ErrorProcessor - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global1] 17 May 2024 11:26:39,093 INFO ErrorProcessor - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-Global1] 17 May 2024 11:26:39,094 INFO ErrorProcessor - Restoring Context ClassLoader to com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader@14bf9759
[AD Thread Pool-Global1] 17 May 2024 11:26:39,094 INFO ErrorProcessor - Error Objects registered with controller :{java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1702013436=2272876}
[AD Thread Pool-Global1] 17 May 2024 11:26:39,094 INFO ErrorProcessor - Adding entry to errorKeyToUniqueKeyMap [2272876], StackTraceErrorKey{hashCode=-1702013436}, java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1702013436

Kindly assist.

Regards,
Amit Singh Bisht
Has anyone attempted to enable all the correlation searches in the "Use Case Library" for enterprise security? There are over 1,000 correlation searches. Will this impact the performance of the Search Head (SH) and indexer? If I have 1,000 EPS, what hardware resources would be required? Alternatively, what minimum hardware resources are needed to enable all the correlation searches in the use case library? Thank you.
Hi, we recently changed the tsidxWritingLevel from 1 to 4 for performance and space savings. Is there any way to check whether this modification has improved performance and disk usage in our environment? Thanks
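One rough way to check the space side (a sketch; the field semantics should be double-checked against the dbinspect docs): compare the on-disk-to-raw size ratio of buckets from before and after the change. The 30-day cutoff below is a placeholder for when the setting was changed, rawSize is, as far as I know, reported in bytes, and startEpoch reflects event time rather than bucket creation time, so treat the split as approximate:

| dbinspect index=your_index
| eval ratio = sizeOnDiskMB / (rawSize / 1024 / 1024)
| eval period = if(startEpoch > relative_time(now(), "-30d@d"), "after_change", "before_change")
| stats avg(ratio) AS avg_disk_to_raw_ratio BY period

A lower average ratio for the "after_change" buckets would suggest the new writing level is saving space; note that only buckets written after the change benefit, since existing buckets are not rewritten.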
Hi, I'm looking for my next role and wanted to reach out to the community for guidance on where to look for roles that use AppDynamics, as I would love to continue working with this amazing technology and helping improve online experiences. Thanks, Sunil
In Dashboard Studio, I applied a drilldown to one of the standard icons and linked it to another dashboard. The goal is to view the linked dashboard upon clicking the icon, and it works. However, people get distracted when they hover the mouse over the icon and the Export and Full Screen icons pop up. Is there a way to disable this default, unneeded functionality so nothing pops up on mouse hover over an icon?

@elizabethl_splu
I am not seeing an option to make my dashboard public or shared. Please guide.
Hi all, when we do a Splunk search in our application (sh_app1), we notice some fields are duplicated/doubled up (refer: sample_logs.png). If we do the same search in another application (sh_welcome_app_ui), we do not see any duplication for the same fields.

cid Perf-May06-9-151xxx
level INFO
node_name aks-application-xxx

SPL being used:

index=splunk_idx source=some_source
| rex field=log "level=(?<level>.*?),"
| rex field=log "\[CID:(?<cid>.*?)\]"
| rex field=log "message=(?<msg>.*?),"
| rex field=log "elapsed_time_ms=\"(?<elap>.*?)\""
| search msg="\"search pattern\""
| table cid, msg, elap

The event count remains the same whether we search inside that app or any other app; only some fields are duplicated. We couldn't figure out where the actual issue is. Can someone help?
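Field duplication that differs per app context usually points at the same extraction being defined in more than one app (e.g. an automatic extraction in an app's props.conf plus the inline rex). One way to check (a sketch; it requires permission to query the REST endpoint, and the stanza name is a placeholder) is to list the search-time extractions and see which apps they come from:

| rest /servicesNS/-/-/data/props/extractions
| search stanza="your_sourcetype"
| table eai:acl.app, title, stanza, value

If the same field is extracted in two apps that are both visible from sh_app1 but only one is visible from sh_welcome_app_ui, that would explain seeing doubled multivalue fields in one app and not the other.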
Has anyone successfully used the Splunk API call /services/saved/searches/SEARCH_NAME (https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#saved.2Fsearches.2F.7Bname.7D) to add a webhook to an existing Splunk report? I added action.webhook=1, action.webhook.param.url=https://1234.com, and actions=pagerduty,webhook successfully through the API, but the Splunk UI does not show the webhook (please see screenshot). Does anyone have any idea what the problem might be?

curl \
--data-urlencode 'action.webhook.param.url=https://1234.com' \
--data-urlencode 'action.webhook=1' \
--data-urlencode 'actions=pagerduty,webhook' \
--data-urlencode 'output_mode=json' \
--header "Authorization: Splunk A_TOKEN_HERE" \
--insecure \
--request 'POST' \
--retry '12' \
--retry-delay '5' \
--silent \
"https://localhost:8089/services/saved/searches/test-12345"
After installation of Alert Manager Enterprise 3.0.6 in Splunk Cloud, the start screen never appears and gives the error "JSON replay had no payload value" 10 times.

Q: Has anyone run into this error?
Hello, I'm trying to dynamically set some extractions to save myself the time and effort of writing hundreds of extractions. In my org's IdAM solution, we have hundreds of various user claims, e.g.:

Data={"Claims":{"http://wso2.org/claims/user":"username","http://wso2.org/claims/role":"user_role",...etc}

I would like to set up a single extraction that will extract all of these claims. My idea was the following:

props.conf:

EXTRACT-nrl_test = MatchAllClaims

transforms.conf:

[MatchAllClaims]
FORMAT = user_$1::$2
REGEX = \"http:\/\/wso2.org\/claims\/(\w+)\":\"([^\"]+)
MV_ADD = true

I was hoping this would extract the fields dynamically, but it did not work. Is there a way to accomplish this with one extraction?

Thank you
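Two hedged observations on the config above. First, if I'm reading the props.conf semantics right, an EXTRACT- class takes an inline regex, while a named transforms.conf stanza is referenced with REPORT- (i.e. REPORT-nrl_test = MatchAllClaims), and REPORT with FORMAT = user_$1::$2 plus MV_ADD = true is the documented route for creating field names dynamically at search time. Second, the regex itself does what's intended, which a quick standalone Python check confirms (the sample event is a simplified stand-in, and the dot in wso2.org is escaped here for safety):

```python
import re

# Simplified stand-in for one of the IdAM events described above
sample = ('Data={"Claims":{"http://wso2.org/claims/user":"username",'
          '"http://wso2.org/claims/role":"user_role"}}')

# Same idea as the transforms.conf REGEX, with the dot escaped
pattern = re.compile(r'"http://wso2\.org/claims/(\w+)":"([^"]+)')

# Mirrors FORMAT = user_$1::$2 -> field user_<claim> = <value>
fields = {"user_" + name: value for name, value in pattern.findall(sample)}
# fields == {"user_user": "username", "user_role": "user_role"}
```

So if switching EXTRACT- to REPORT- makes the fields appear, the regex was never the problem, only how props.conf was wiring it in.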
My Splunk web service cannot recognize my source type from the props.conf file when I try to add data. Here is my props.conf file's content:

[Test9]
TIME_PREFIX = \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\s\-\s\d{5}\s+
TIME_FORMAT = %m/%d/%Y %k:%M
MAX_TIMESTAMP_LOOKAHEAD = 15
LINE_BREAKER = ([\r\n]+)\d+\s+\"\$EIT\,
SHOULD_LINEMERGE = false
TRUNCATE = 99999

My props.conf file path is: C:\Program Files\Splunk\etc\apps\test\local
Dear Team, please let me know how to set up Azure Private Link from a customer Azure Virtual Network (VNet) to Splunk Cloud (onsite, not in the Azure cloud). Thanks.
I am new to Splunk, so my question may be very basic. I have built a Splunk dashboard using the Classic option. I have some statistics tables and line charts in there. The drilldown works great if configured as "Link to search" and Auto, which opens in the same window. But I want it to open in a new window. When I try to configure it as Custom, it doesn't open the relevant record/log which I am clicking.

Below is the decoded URL when I configure the drilldown as Auto (when it works):

https://splunk.wellsfargo.net/en-US/app/wf-s-eft/search?q=search index=**** wf_id=*** source="****" <other search condition> | search Dataset="DS1" | rename ToatalProcessTime AS "Processing Time", TotalRecordsSaved AS "Record Saved", WorkFlow AS Integration &earliest=1716004800.000&latest=1716091200&sid=1716232362.2348555_113378B4-9E44-4B5A-BDBA-831A6E059142&display.page.search.mode=fast&dispatch.sample_ratio=1

I have edited the URL for privacy; <other search condition> is the extended search condition. Below are the search conditions injected by Splunk:

- search Dataset="DS1" - where DS1 is the dataset which I clicked
- earliest=1716004800.000&latest=1716091200 - these are the 2 values sent based on the click

How can I pass these values while configuring a Custom drilldown to open in a new window?

Thanks in advance!
Sid
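In Simple XML, the usual way to get a new window is a custom <drilldown> element with <link target="_blank">, passing the click tokens along in the URL. A sketch (the app path, index, and base search are placeholders, the query portion must be URL-encoded, and depending on which column is clicked you may need $row.Dataset$ rather than $click.value$):

```xml
<drilldown>
  <link target="_blank">/app/your_app/search?q=search%20index%3Dyour_index%20%7C%20search%20Dataset%3D%22$click.value$%22&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>
```

This reproduces the two injected conditions you listed: the clicked dataset goes into the query string, and the panel's earliest/latest tokens carry the time range, while target="_blank" forces the new window.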
I have a dbxquery command that queries an Oracle server that has a DATE value stored in GMT. My SQL converts it to a string so I can later use strptime to build the _time value for timecharting:

SELECT TO_CHAR(INTERVAL_START_TIME, 'YYYY-MM-DD-hh24-mi-ss') as Time FROM ...

Then at the end of my SPL:

... | eval _time=strptime(TIME,"%Y-%m-%d-%H-%M-%S")
| timechart span=1h sum(VALUE) by CATEGORY

On the chart that renders, we see values in GMT (which we want). My user timezone is Central Standard, however, not GMT. When I click (drilldown) a value, $click.value$ passes the epoch time CONVERTED TO CST. As an example, if I click the bar that is for 2 PM today, my click-action parameter is 1715972400.000, which is Friday, May 17, 2024 7:00:00 PM GMT - 5 hours ahead. I validated this by changing my user timezone to GMT, and then it passes the epoch time in GMT. I googled 'splunk timezone' and haven't found anything yet that addresses this specifically (I did find this related thread, but no solution: https://community.splunk.com/t5/Dashboards-Visualizations/Drill-down-changes-timezones/m-p/95599), so I wanted to ask here! It's an issue because the drilldown also relies on dbxquery data, so my current plan is to deal with the incorrect time in the drilldown (in SQL), but I can only support that if all users are in the same timezone. In conclusion, what would be nice is if I could tell Splunk to 'not change the epoch time' when clicked. I think!
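The underlying behavior can be reproduced outside Splunk: a strptime whose format string carries no timezone information has to assume *some* zone, and both Splunk's eval strptime and the drilldown token math apply the user/search-head timezone, which is why the epoch shifts when your user timezone is not GMT. A small Python illustration of the ambiguity (the timestamp shape matches the TO_CHAR output above):

```python
from datetime import datetime, timezone

# Same shape as the TO_CHAR(..., 'YYYY-MM-DD-hh24-mi-ss') output
ts = "2024-05-17-14-00-00"

# No timezone in the format string, so the parsed datetime is "naive":
# any consumer must pick a zone to turn it into an epoch
naive = datetime.strptime(ts, "%Y-%m-%d-%H-%M-%S")

# Declaring the value as UTC/GMT yields a timezone-independent epoch
utc_epoch = naive.replace(tzinfo=timezone.utc).timestamp()  # 1715954400.0
```

Note the 5-hour gap between 1715954400 (the string read as GMT) and the 1715972400 your click passes (the same wall-clock read as CDT), which matches the offset described. In Splunk itself the practical fixes are per-user (set the user timezone to GMT) or data-side (correct for the offset in the drilldown SQL, as you're planning), since the epoch token is derived from the user's zone.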