Hello, I am using Dashboard Studio on Splunk Cloud 8.2.2203.2, where I have a base search and 2 chained searches that reference the base search. The base search uses the Global Time Range (global_time) as a time range input when searching. The chained searches should also inherit the same value that the base search gets from global_time, as shown below.   "Time Range Currently using Global Time Range input $global_time.earliest$ - $global_time.latest$"   However, when I change the time input, the panel that uses one of the chained searches does not reload automatically and only works if I refresh the entire page. In addition, when I click the magnifying glass (Open in Search) for the panel, it takes me to a search page but returns no results because of the error "Invalid earliest_time". If I then manually select "Last 24 hours" in the search time range picker, that resolves the error and returns results. This tells me that the search query itself is good, but there may be an issue with the time range value not being passed from the base search to the chained search. If my panel references the base search directly, the time range value works perfectly: the dashboard re-runs the search when I change the time, and there is no error when I click "Open in Search".   I also noted that after I click "Open in Search" for the panel that uses a chained search, the URL contains "earliest=%24global_time.earliest%24&latest=%24global_time.latest%24". This tells me that the value global_time was holding was not passed on to the chained search. I confirmed this by manually selecting "Last 24 hours" in the time range picker and noting "earliest=-24h%40h&latest=now" in the URL; something along those lines should have been in the URL when I click "Open in Search", instead of the variable names.
Can someone please help me determine whether this is a bug, or whether something special needs to be configured for a chained search to inherit the value from a time range token?   Thank you
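One workaround sometimes used while investigating issues like this is to turn the problematic chained search into a standalone data source that carries the time tokens explicitly, so nothing depends on inheritance from the base search. A rough Dashboard Studio source-JSON sketch (the data source name and query are placeholders, not from the dashboard in question):

```json
"dataSources": {
    "standaloneSearch1": {
        "type": "ds.search",
        "options": {
            "query": "index=main | stats count by host",
            "queryParameters": {
                "earliest": "$global_time.earliest$",
                "latest": "$global_time.latest$"
            }
        }
    }
}
```

If the panel reloads correctly with the explicit queryParameters, that would support the theory that token inheritance from the base search is what is failing.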
I would like to have a report emailed to me a few minutes after an alert goes off. While the alert can include the results, it is based on something specific and will not have all the information I need. Let's say the alert is set up to catch too many host communication errors to a specific endpoint: Errors > 100. Currently, I either go to the alert and alter it to make a time chart to see any trends, or go to a specific dashboard that shows communication errors with other endpoints, network status, response times, etc. When the problem goes away, I take all the Splunk graphs and make an incident report. I would like a report with graphs and other info based on the dashboard emailed to me at the time of the alert and 10 minutes after. Sometimes I can get to my email, but not to Splunk. This would also help with the incident reports and make them more uniform. Is this possible? I have not worked with reports much. Can a report be triggered by a separate search? I could not find that answer online, so I believe it can't. If possible, I could write a query that looks at the last time an alert went off and have that trigger the associated report. I would like some type of PDF that I can just attach to the incident report. More importantly, I would like much more detail emailed to me after an alert. I'm not even sure what an emailed report looks like. I could google that, but if I can't trigger it, there is no need for the report. Although, in reading about reports, I do want to use them more with dashboards. Thanks
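For reference, an alert can attach its own results as a PDF through the built-in email alert action. A hedged savedsearches.conf sketch (the stanza name, search, and address are placeholders):

```ini
[My Endpoint Error Alert]
# run every 5 minutes; the search defines the trigger condition
cron_schedule = */5 * * * *
search = index=main sourcetype=comm_errors | stats count AS Errors | where Errors > 100
actions = email
action.email = 1
action.email.to = me@example.com
action.email.inline = 1
# attach the triggering search's results as a PDF
action.email.sendpdf = 1
```

This only covers the alert's own results; delivering the richer dashboard view 10 minutes later would need either a scheduled PDF delivery of the dashboard or a custom alert action/script, since a report cannot be triggered directly by another search out of the box.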
I have 3 filters for servers like this (the tokens from these filters are used in the query):
Server1: Bridge_API, Bridge_UAT, Bridge_UAT_API
Server2: PG_API, PG_UAT, PG_UAT_API
Server3: PA_API, PA_UAT, PA_UAT_API
When I select a server type from any of the dropdowns, e.g. Bridge_API from the Server1 dropdown, the other filters should switch to *_API and query the data (if I select a server from Server2, the corresponding suffixed servers should be updated). Similarly, for Bridge_UAT the others should switch to PG_UAT and PA_UAT. How can I achieve this?
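In Simple XML, one possible approach (an untested sketch; token names server1/server2/server3 are assumptions based on the question) is a <change> handler on each dropdown that derives the suffix and pushes it to the other inputs via their form.* tokens:

```xml
<input type="dropdown" token="server1" searchWhenChanged="true">
  <label>Server1</label>
  <choice value="Bridge_API">Bridge_API</choice>
  <choice value="Bridge_UAT">Bridge_UAT</choice>
  <choice value="Bridge_UAT_API">Bridge_UAT_API</choice>
  <change>
    <!-- derive the suffix (API, UAT, UAT_API) and update the other dropdowns -->
    <eval token="form.server2">"PG_" . replace($value$, "^Bridge_", "")</eval>
    <eval token="form.server3">"PA_" . replace($value$, "^Bridge_", "")</eval>
  </change>
</input>
```

The Server2 and Server3 inputs would need matching <change> handlers in the other directions so a selection on any of the three keeps the others in sync.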
I am trying to use a colon ( : ) in my js file; however, I do not see results when I use the colon. I verified that the command works with the colon when I run it within a Search window. I also have it working without the colon in the js file; I just can't seem to use the colon in the js file. The following code in my js file does not work:   ... | search (path IN (\"*:\\windows\\*\")) | stats count     The following code in my js file works:   ... | search (path IN (\"*\\windows\\*\")) | stats count     I tried to escape it like I did the double-quotes, but that did not work. Is there a way to use the colon in the js file?   Thanks
Hi All, I need your help to get a list of all field names into a dropdown filter from SPL results at runtime. Description: I have an SPL query in a panel of the dashboard. I need the column names of the results dynamically loaded into a dropdown list in the same dashboard. I searched about this and found a similar post: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-create-a-dropdown-search-on-columns-of-data-which-aren-t/m-p/165658/highlight/true However, it suggests using a <populatingSearch> tag, and when I use that tag I get a warning: "Legacy notation: populatingSearch". Thus, I need your help to build the same. Thank you.
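The legacy <populatingSearch> notation maps onto a <search> element inside the input in current Simple XML. A sketch, assuming a placeholder index/sourcetype and using fieldsummary to emit one row per field name in the results:

```xml
<input type="dropdown" token="selected_field">
  <label>Column</label>
  <fieldForLabel>field</fieldForLabel>
  <fieldForValue>field</fieldForValue>
  <search>
    <!-- fieldsummary returns one row per field present in the results -->
    <query>index=my_index sourcetype=my_sourcetype | fieldsummary | fields field</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</input>
```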
Hello, I've recently upgraded from Splunk 7.0 to Splunk 9.0. One of the things that ended up breaking is the Splunk Add-on for Tenable (5.1.4). I knew it was going to stop working due to compatibility issues, and that's fine since we really needed to upgrade Splunk. Is there any other way for our Splunk environment to receive Nessus data? We currently have Nessus Professional Version 10, and it does not seem to work with the Tenable Add-on for Splunk.  Thanks, Grant
Greetings, I have a dashboard with 2 panels. The first panel uses a simple input for userid to fuel the search:

index=foo sourcetype=bar $userid$ | table session

This will return a varying number of session results depending on the time period specified. I want to take all the returned values and feed them into a second panel search to show how many times a specific event occurs for each session:

index=foo sourcetype=bar eventtype=specific $sessionid$ | stats count AS Total by session

I populate the token $sessionid$ with the following XML at the end of the first panel:

<finalized>
<condition match=" 'job.resultCount' != 0">
<set token="sessionid">$result.session$</set>
</condition>
</finalized>

My problem is, this only returns the first value from the first search. I need it to send all values of session to search by. For example, if the first search returns multiple lines with session values A1, B2, C3, I would like to format the token to produce this search:

index=foo sourcetype=bar eventtype=specific session IN (A1,B2,C3) | stats count AS Total by session

Hopefully this is clear; let me know if it is not. Thanks!
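$result.<field>$ only ever carries the first result row, so one common pattern is to build the whole session IN (...) clause inside the first search and set the token from that single row. A sketch based on the searches in the question (the helper field name session_filter is made up for illustration):

```xml
<search>
  <query>
    index=foo sourcetype=bar $userid$
    | stats values(session) AS sessions
    | eval session_filter = "session IN (" . mvjoin(sessions, ",") . ")"
  </query>
  <done>
    <condition match="'job.resultCount' != 0">
      <set token="sessionid">$result.session_filter$</set>
    </condition>
  </done>
</search>
```

The second panel's search, index=foo sourcetype=bar eventtype=specific $sessionid$ | stats count AS Total by session, can then stay unchanged.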
We are working on a webhook setup via Fivetran, as we want to fetch data from Splunk into another platform. How can we increase the number of rows sent, as only 128 rows are currently pushed successfully?
The MS Teams Alert Action add-on is sending only the first row from the output of the alert to MS Teams. I have multiple rows in the output and want the entire table to be sent to Teams as an alert. Please suggest how to configure that. Thanks in advance for your responses.
For example, below is my raw data in the sample.log file:

This is a |AWS| test log testing.

The source of this file is /opt/sample.log, but I want to change my source from source=/opt/sample.log to source=AWS, extracted from the raw data while indexing in Splunk.

props.conf
[log]
TRANSFORMS-sourcechange = replacedefaultsource

transforms.conf
[replacedefaultsource]
WRITE_META = true
SOURCE_KEY = _raw
REGEX = \|(.*)\|
DEST_KEY = MetaData:Source
FORMAT = source::$1

Thank you in advance, please help me.
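For comparison, a hedged version of the same configuration with two common pitfalls addressed: the props stanza must match the event's sourcetype (or use a [source::...] pattern), and a non-greedy character class avoids capturing everything between the first and last pipe in the event. WRITE_META is for indexed field extractions and is not needed when DEST_KEY is used:

```ini
# props.conf -- stanza must match the sourcetype assigned to /opt/sample.log
[log]
TRANSFORMS-sourcechange = replacedefaultsource

# transforms.conf
[replacedefaultsource]
SOURCE_KEY = _raw
REGEX = \|([^|]+)\|
DEST_KEY = MetaData:Source
FORMAT = source::$1
```

This is index-time configuration, so it belongs on the indexer (or heavy forwarder) and only affects newly indexed events.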
Does anyone have any experience using the IP Quality Score add-on in Splunk? I've been given very little information on how to actually run searches in the add-on, and so far I'm not getting any results. For instance, I'm trying to use the IP Detection commands on our web traffic logs, but I'm not getting any results. I just keep getting an error saying:   Exception at "/opt/splunk/etc/apps/TA-ipqualityscore/bin/ipdetection.py", line 127 : There are no events with ip field.
I installed the Splunk App for SOAR Export on Splunk, and I can see two alert options in Manage Alerts, namely 'Run Playbook in SOAR' and 'Send to SOAR'. However, when I go to add an alert action, these two are missing. These options were available when I first installed the app, but then they disappeared from the alert actions list.
Hello, I'd like to transpose table results by grouping columns. Here is my table:

time1 | event1 | time2 | event2 | time3 | event3
01/01/2022 | titi | 02/01/2022 | toto | 04/01/2022 | tata

I'd like to transpose this structure in this way:

time | content
01/01/2022 | titi
02/01/2022 | toto
04/01/2022 | tata

I didn't find a way to solve this. Thanks in advance
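One possible SPL approach (a sketch that assumes exactly the three time/event column pairs shown) is to glue each pair into one multivalue field, expand it, and split it back apart:

```
<your base search>
| eval pairs = mvappend(time1 . "|" . event1, time2 . "|" . event2, time3 . "|" . event3)
| mvexpand pairs
| eval time = mvindex(split(pairs, "|"), 0), content = mvindex(split(pairs, "|"), 1)
| table time content
```

The "|" separator is arbitrary; any character that cannot appear in the time or event values would work.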
Hello, is it possible to send metrics to an event index? For instance, indexing df_metric from Splunk_TA_nix. Thanks.
Hello everyone, I have the following type of data to analyze:

timestamp | endpoint | executionTime
08:12 | /products | 0.3
08:20 | /products | 0.8
08:25 | /users | 0.5
08:41 | /users | 1.0
08:50 | /products | 0.7

I would like to display information about the slowest endpoint in each 30-minute window; in this example it would look like:

timeWindow | timestamp | endpoint | maxExecutionTime
08:00 | 08:20 | /products | 0.8
08:30 | 08:41 | /users | 1.0

It's fairly easy to gather data on maximum execution time only, so I created this query:

index = myindex | timechart span=30m max(executionTime) as maxExecutionTime

but now I have no idea how to attach the endpoint called and the actual timestamp. How should I do it?
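timechart keeps only the aggregate, so one way to keep the endpoint and timestamp of the slowest call is bin plus eventstats, then filtering to the rows that hit the per-window maximum. A sketch against the fields named above:

```
index=myindex
| bin span=30m _time AS timeWindow
| eventstats max(executionTime) AS maxExecutionTime by timeWindow
| where executionTime = maxExecutionTime
| table timeWindow _time endpoint maxExecutionTime
```

If two calls tie within a window, this yields more than one row for it; appending | dedup timeWindow would keep one.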
Hi Guys,

Need some help with setting up multisite indexer clustering. We have two datacenters, A and B. Below is the server architecture for these datacenters:

DATACENTER A
We have 3 Search Heads: SH-A, SH-B, SH-C (in a search head cluster) and 2 Indexers: IDX-1, IDX-2

DATACENTER B
We have 3 Disaster Recovery Search Heads: SH-A-DR, SH-B-DR, SH-C-DR (in a search head cluster) and 2 Indexers: IDX-3, IDX-4

Now, we want to set up indexer clustering in such a way that:
IDX-1 and IDX-3 are clustered
IDX-2 and IDX-4 are clustered
so that SH-A,B,C (in DC A) can search IDX-1 and IDX-2, while during DR, SH-A-DR,B-DR,C-DR (in DC B) can search IDX-3 and IDX-4.

What would be the best way to get this setup done? Do we need to set up 2 Cluster Masters? If yes, how do we set up a search head cluster with 2 Cluster Masters? Please suggest.

Thanks,
Neerav
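For reference, the usual design for a two-datacenter layout like this is a single multisite indexer cluster with one cluster manager (not two), each indexer tagged with its site, and search heads given site affinity. A hedged server.conf sketch with placeholder hostnames and keys, using Splunk 9 naming:

```ini
# server.conf on the single cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
pass4SymmKey = <key>

# server.conf on each indexer (IDX-1/IDX-2 -> site1, IDX-3/IDX-4 -> site2)
[general]
site = site1

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <key>
```

The DC A search heads would join with mode = searchhead and site = site1, the DR search heads with site = site2, so each side prefers its local bucket copies while the data is still replicated across both datacenters.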
Hello everyone! I'm trying to split a single multivalue event into multiple multivalue events. Here is my base search:

sourcetype="xxxx"
| transaction clientip source id maxspan=5m startswith="yesorno=yes" endswith="event=connected" keepevicted=true mvlist=true,responsetime,status,yesorno,clientip,event,_time
| sort _time
| eval MergedColumns=responsetime . " " . yesorno
| stats list(event) as event, list(MergedColumns) as MergedColumns, list(responsetime) as responsetime, by yesorno, clientip, id
| where !(event=="connected")
| table MergedColumns source clientip

Unfortunately, I am obliged to use transaction here and not the stats command. Here is my data:

MergedColumns | source | clientip
10 yes, 510 no, 348 no, 50886 no | username1 | xxx.xxx.xxx.xxx
10 yes, 513 no, 1239 no, 9 yes, 160 no, 340 no, 21421 no, 509 no, 685 no, 13799 no, 149 no | username2 | xxx.xxx.xxx.xxx

I would like to split my event on the "xxx yes" values like so:

MergedColumns | source | clientip
10 yes, 510 no, 348 no, 50886 no | username1 | xxx.xxx.xxx.xxx
10 yes, 513 no, 1239 no | username2 | xxx.xxx.xxx.xxx
9 yes, 160 no, 340 no, 21421 no, 509 no, 685 no, 13799 no, 149 no | username2 | xxx.xxx.xxx.xxx

Moreover, here I have only two "xxx yes" values in the same multivalue event, but I can possibly have more than that (like 3 or 4). I tried lots of things but none seem to work (here is the regex to extract "xxx yes": "^\S{1,} yes$"). In fact, adding this:

| mvexpand MergedColumns
| regex MergedColumns="^\S{1,} success"
| table MergedColumns source clientip

seems to split my values correctly; however, it removes all the remaining "xxx no" values. Does anyone have a solution? Kind regards,
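One possible approach (an untested sketch reusing the fields from the search above, and assuming source survives to this point in the pipeline) is to expand the multivalue field, let streamstats start a new group at every "xxx yes" marker row, and rebuild the lists per group:

```
<your existing transaction / eval search>
| mvexpand MergedColumns
| streamstats count(eval(match(MergedColumns, "^\S+ yes$"))) AS groupId by clientip id
| stats list(MergedColumns) AS MergedColumns, first(source) AS source by clientip id groupId
| fields - groupId
| table MergedColumns source clientip
```

Because the conditional count only increments on "yes" rows, every run of values starting at a "yes" and containing the following "no" rows lands in its own groupId, however many "yes" markers an event contains.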
Intro
The client upgraded their Oracle DB from v12.1.0.2 to v19.15.0.0.0. The client DBAs were experiencing issues with load and caching on their DB and found queries run by the service account the AppD agent uses to be a culprit, so they asked for the AppD DB agent to be upgraded.

Initial action
Upgraded the prod, on-prem Windows DB agent from v21.2.0.2285 to v22.6.0.2803 (latest available at the time).

Issue
2 of the Oracle DBs stopped reporting the DB Load metrics consistently. The DB agent logs have no Error or Warning entries related to these 2 DB collectors that can be troubleshot. There are several other Oracle DBs on the same version, the same controller, and agents that are not experiencing the issue. All other DB metrics are reporting in.

Further testing
Tested in pre-prod by having the pre-prod DB agent (same new version as prod) and pre-prod controller (SaaS, v22.5.0-662) monitor the 2 problematic prod Oracle DBs. The same issue is present: no Load metrics. (Points to an agent version issue.)
Monitored the pre-prod equivalent Oracle DB (same Oracle version, different DB), but it does not experience the issue. (Issue isolated to specific DBs.)
Tested by monitoring the problematic DBs with a completely different DB agent matching the older version that was upgraded away from, and the issue is not present. This again shows it's related to the new agent version.

Conclusion
The client could not carry on with missing Load metrics for the DBs in question, so the agent was rolled back to the older version that did not have this issue. AppD support has 2 theories and is still asking for more queries to be run against the prod DBs while they experience the issue, but this only happens on the prod DBs and we cannot recreate it in pre-prod, so it's not a quick thing to do (breaking the prod DB monitoring just to wait for the issue and then run queries).
Theory 1: The DB agent is not able to query the DB.
Theory 2: The DB agent is not able to send all metrics to the SaaS controller.
The screenshot below shows the Load metrics not reported consistently for a busy prod Oracle DB. [Screenshot: Load metrics not reporting in]
I am hoping someone else might have come across this issue as well and has a possible solution, or more evidence pointing to a possible DB agent version bug.
*Second time I created this post because my first one just went missing after I submitted it.
Hi, I have a curious problem (btw, not my first PowerShell input). I am trying to input some Active Directory data into Splunk right now. Below is a slightly altered output of my script:

[ { "SpecialUsers_S": false, "SpecialUsers_X": false, "SpecialUsers_U": false, "SpecialUsers_A": false, "SpecialUsers_TBM": false, "SpecialUsers_T": false, "HR_Canceled_Users": false, "HR_Inactive_Users": false, "HR_Temporary-Inactive_Users": false, "FehlerStatus": "0", "PasswordNeverExpires_State": "null", "OU_State": "null", "Account_State": "null", "Manager_State": "null", "Account_Expiration_Date": "null", "EmployeeNumberError": "null", "DescriptionError": "null", "ManagersViaGroup": "null", "Wrong_Name": "null", "Wrong_EMail": "null", "Manager_Description": "null", "Multiple_SpecialGroups": "null", "Multiple_HR_Groups": "null", "SamAccountName": "SamAccount01", "Enabled": true, "EmployeeNumber": "11112", "SN": "Surname01", "Description": "0200000000", "Department": "Department01", "Company": "The Firm", "emailaddress": "Email01@domain.com", "DistinguishedName": "The Distinguished Name 01", "hkDS-EntryDate": "09.09.1991 02:00:00", "LastLogonDate": "18.07.2022 07:22:38", "PasswordLastSet": "02.06.2022 09:22:36" }, { "SpecialUsers_S": false, "SpecialUsers_X": false, "SpecialUsers_U": false, "SpecialUsers_A": false, "SpecialUsers_TBM": false, "SpecialUsers_T": false, "HR_Canceled_Users": false, "HR_Inactive_Users": false, "HR_Temporary-Inactive_Users": false, "FehlerStatus": "0", "PasswordNeverExpires_State": "null", "OU_State": "null", "Account_State": "null", "Manager_State": "null", "Account_Expiration_Date": "null", "EmployeeNumberError": "null", "DescriptionError": "null", "ManagersViaGroup": "null", "Wrong_Name": "null", "Wrong_EMail": "null", "Manager_Description": "null", "Multiple_SpecialGroups": "null", "Multiple_HR_Groups": "null", "SamAccountName": "SamAccount02", "Enabled": true, "EmployeeNumber": "11113", "SN": "Surname02", "Description": "000000000", "Department": "Department02", "Company": "The Firm", "emailaddress": "email02@Domain.com", "DistinguishedName": "The Distinguished Name 01", "hkDS-EntryDate": "10.10.2002 02:00:00", "LastLogonDate": "18.07.2022 08:07:31", "PasswordLastSet": "26.05.2022 17:27:42" } ]

Exported into a file and tested with validators, all is fine. But what I see in Splunk is:

"SpecialUsers_S": false, "SpecialUsers_X": false, "SpecialUsers_U": false, "SpecialUsers_A": false, "SpecialUsers_TBM": false, "SpecialUsers_T": false, "HR_Canceled_Users": false, "HR_Inactive_Users": false, "HR_Temporary-Inactive_Users": false, "FehlerStatus": "0", "PasswordNeverExpires_State": "null", "OU_State": "null", "Account_State": "null", "Manager_State": "null", "Account_Expiration_Date": "null", "EmployeeNumberError": "null", "DescriptionError": "null", "ManagersViaGroup": "null", "Wrong_Name": "null", "Wrong_EMail": "null", "Manager_Description": "null", "Multiple_SpecialGroups": "null", "Multiple_HR_Groups": "null", "SamAccountName": "SamAccount01", "Enabled": true, "EmployeeNumber": "null", "SN": "", "Description": "null", "Department": "null", "Company": "", "emailaddress": null, "DistinguishedName": "The Distinguished Name", "hkDS-EntryDate": "null", "LastLogonDate": "null", "PasswordLastSet": "null" }

As you can see, I am missing a lot of information, and I can't figure out why. Some fields like SamAccountName and DistinguishedName work, but other variables like Company, Department, or Description are missing. The script is rather long, but if needed I can post parts of it showing how I do things. The inputs.conf for this is:

[powershell://Get_AD_Report]
script = . "$SplunkHome\etc\system\bin\Powershell\GetADReport.ps1"
schedule = 15 * * * *
sourcetype = _json
index = hk_office365

Maybe someone has some kind of clue about what's happening there? It would really help; I've been on this for much too long already and have tried so many different ways.
We are working on a table creation, where we just pass the SPL query to the Splunk JS, which populates the table in the UI. My problem is that I'm not able to keep the table headers fixed; when I scroll, they do not stay. Also, to scroll horizontally I have to go all the way to the end of the table layout. Any help or suggestion would be greatly appreciated.   Thanks, Jabez.
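If the table is rendered as a plain HTML table, one CSS-only approach worth trying (a sketch; "#myTablePanel" is a placeholder for whatever id or class your JS gives the panel) is position: sticky on the header cells inside a fixed-height scrollable wrapper:

```css
/* dashboard.css -- "#myTablePanel" is a placeholder selector */
#myTablePanel {
    max-height: 400px;
    overflow: auto;          /* one scroll container for both axes */
}
#myTablePanel table thead th {
    position: sticky;
    top: 0;
    background: #fff;        /* keep rows from showing through the header */
    z-index: 1;
}
```

Sticky headers only work when the scrolling happens on an ancestor of the table rather than on the page body, which is also what keeps the horizontal scrollbar inside the panel instead of at the end of the layout.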