All Topics


Any suggestions would be appreciated. In the first row I would like to show only the first 34 characters, and in the second row only the first 39 characters. I figured out how to show a fixed number of characters for all rows, but not for individual rows.
| eval msgTxt=substr(msgTxt, 1, 49) | stats count by msgTxt
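If it helps, here is a minimal sketch of one way to do it, assuming "first row" and "second row" mean rows of the final table: number the rows with streamstats and branch on the row number with case().

| stats count by msgTxt
| streamstats count as row
| eval msgTxt=case(row=1, substr(msgTxt, 1, 34), row=2, substr(msgTxt, 1, 39), true(), msgTxt)
| fields - row

If truncating causes two rows to end up with the same value, a further | stats sum(count) as count by msgTxt would merge them.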
Recently, Enterprise Security allowed for event timestamps to be index time instead of event time. I was excited about this since it would alleviate some issues related to log ingestion delays and outages. However, it appears there are some limitations which I have questions about. From the previously linked docs:

"Selecting Index time as the time range for a correlation search might impact the performance of the search."
What is the nature of the impact, specifically?

"Select Index time to run a correlation search only on raw events that do not use accelerated data model fields or the tstats command in the search. Otherwise, the UI might display errors. You can update the correlation search so that it does not include any tstats commands to avoid these errors."
So there is just no option to use index time with accelerated data models? Will this feature be added in the future?

"Drill down searches for notables might get modified when using Index time."
Modified in what way?

"Index time filters are added after the first " | " pipe character in a search string. Index time filters do not have any effect on accelerated datamodels, stats, streaming, or lookup commands. So, custom drilldown searches must be constructed correctly when using Index time."
What are index time filters? What is the correct way to construct custom drilldowns when using index time?

"Index time might not apply correctly to the Contributing Events search for risk notables."
How might it not apply correctly?

"The Index time time range might not be applied correctly to the original correlation search with datamodels, stats, streaming, or lookup commands at the end of the search since the index time range is applied after the "savedseach" construct. Therefore, you must adjust the time range manually for the search."
How might it not apply correctly? Is there a specific example?

"When you select Index time to run the search, all the underlying searches are run using the '''All Time''' time range picker, which might impact the search performance. This includes the correlation search as well as the drill-down search of the notable adaptive response action. Additionally, the drill down search for the notable event in Incident Review also uses index time."
Am I understanding that first sentence correctly? What possible reason could there be to run the underlying search over "All Time"? In that case, what purpose does the alert time range serve? This seems like a massive caveat that makes index time practically unusable.

Index time seemed super promising, but the fact that you can't use it with accelerated data models, that it searches over all time, and that it could modify drilldowns in mysterious and unknown ways makes me wonder what use it actually serves. These seem like major issues, but I wanted to make sure I wasn't misunderstanding something.
I wanted to get some clarification on how trigger conditions affect notable response actions for correlation searches in Enterprise Security. The trigger condition options are "Once" and "For each Result", and I believe I understand the difference. However, underneath them there is a little blurb that says "Notable response actions and risk response actions are always triggered for each result." To me, this essentially nullifies "Once", since the action will be triggered for each result anyway. As a result, I fail to see how "Once" is any different from "For each Result". But surely they can't be the same.
Hello, I'm trying to capture and show only the time it took for the service to complete. Shown below is a record that says the service completed in 1901 ms. Could you please help me write a search query to identify and return records into my dashboard panel that exceed 1900 ms? So, for example, if there are 10 records that exceed 1900 ms, it would look something like this:
GetRisk completed in 1909 ms
GetRisk completed in 1919 ms
GetRisk completed in 2001 ms
GetRisk completed in 2100 ms
And so on.
msgTxt returns:
VeriskService - GetRisk completed in 1909 ms. (request details: environment: Production | desired services: BusinessOwnersTerritory | property type: Commercial xxxxx)
Thank you
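A minimal sketch of one way to approach this, assuming the message always contains "completed in <number> ms"; duration_ms is just an illustrative field name, and the first line stands in for whatever base search already returns these events:

<your base search>
| rex field=msgTxt "completed in (?<duration_ms>\d+) ms"
| where tonumber(duration_ms) > 1900
| table msgTxt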
Hello, I can't load the Settings page: "Something went wrong!" Configuration page failed to load (ERR0002)
Splunk Enterprise 9.1.1 (clustered) / standalone Splunk 9.0.4
Add-on version 2.3.2 or 3.3 (same error)

Log:
07-11-2024 11:05:34.421 +0200 ERROR AdminManagerExternal [25211 TcpChannelThread] - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/handler.py", line 338, in _format_all_response\n self._encrypt_raw_credentials(cont["entry"])\n File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/handler.py", line 368, in _encrypt_raw_credentials\n change_list = rest_credentials.decrypt_all(data)\n File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/splunktaucclib/rest_handler/credentials.py", line 289, in decrypt_all\n all_passwords = credential_manager._get_all_passwords()\n File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/solnlib/utils.py", line 153, in wrapper\n return func(*args, **kwargs)\n File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/solnlib/credentials.py", line 341, in _get_all_passwords\n return self._get_clear_passwords(passwords)\n File "/OPT/splunk/etc/apps/TA-thehive-cortex/bin/ta_thehive_cortex/aob_py3/solnlib/credentials.py", line 324, in _get_clear_passwords\n clear_password += field_clear[index]\nTypeError: can only concatenate str (not "NoneType") to str\n". See splunkd.log/python.log for more details.

Thanks for your help.
Hello Splunkers, hope you had a great day! As per the picture below:

Q1: I need to understand the exact process of creating the TSIDX file and its contents, and how it actually speeds up the search.
Q2: Why is the size of the tsidx file bigger than the raw data itself (35% / 15%)?
Q3: What is the difference between a tsidx file and a data model summary?

I am expecting a long answer and more details; actually, I like details! Thanks in advance!
source=*.log host=myhostname "provider=microsoft" "status=SENT_TO_AGENT"
| timechart dedup_splitvals=t limit=10 useother=t count AS "Count of Event Object" by provider format=$VAL$:::$AGG$
| fields + _time, "*"

This will display a count of entries in the logs that say "SENT_TO_AGENT". I want to display an average line chart for the previous 3 months, with the current month as an overlay over the previous months.
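Not sure if this is exactly what you are after, but one possible sketch is to compare each day of the current month against the average for the same day of the month across the previous months covered by the time picker (run it over the last four months, for example). The "current month" / "previous months" labels are just illustrative:

source=*.log host=myhostname "provider=microsoft" "status=SENT_TO_AGENT"
| bin _time span=1d
| stats count as daily_count by _time
| eval period=if(strftime(_time, "%Y-%m")=strftime(now(), "%Y-%m"), "current month", "previous months")
| eval day=strftime(_time, "%d")
| chart avg(daily_count) over day by period

Plotted as a line chart, the "previous months" series is the 3-month average and the "current month" series is the overlay.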
Hi Splunkers,
I am trying to extract a string within a string, where the string has been repeated with some prefixes and suffixes added; only the very start and end of the string are static values ('AZ-' and '-VMSS').

Example data:
AZ-203-dev-app-1-build-agents-203-dev-app-1-build-agents0006GA-1720624093-VMSS
AZ-eun-dev-005-pqu-ado-vmss-eun-dev-005-pqu-ado-vmss005X89-1720625975-VMSS
AZ-DEV-CROSS-SUBSCRIPTION-PROXY-EUN-BLUE-DEV-CROSS-SUBSCRIPTION-PROXY-EUN-BLUE000000-1720637733-VMSS

I have a working rex command to extract the relevant data (temp_hostname4):
| rex field=source_hostname "(?i)^AZ(?<cap1>(-[A-Z0-9]+)+)(?=\1[A-Z0-9]{6})-(?<temp_hostname4>([A-Z0-9]+-?)+)-\d{10}-VMSS$"

Which correctly extracts:
203-dev-app-1-build-agents0006GA
eun-dev-005-pqu-ado-vmss005X89
DEV-CROSS-SUBSCRIPTION-PROXY-EUN-BLUE000000

But let's face it, this is horrible! According to regex101 this takes 46K+ steps, which can't be nice for Splunk to apply to c.20K records several times per day. Can anyone suggest optimisations to bring that number down?

For added complication (and for clarity to anyone reading this), it's temp_hostname4 because there are multiple other ways the hostname might have been... manipulated before it gets to Splunk, sometimes with the string repeated, sometimes not, resulting in the following SPL. I could use coalesce rather than case, but that's hardly important right now, and separating the regex statements seemed like the saner thing to do in this instance:
| rex field=source_hostname "(?i)^AZ(?<cap1>(-[A-Z0-9]+)+)(?=\1[A-Z0-9]{6})-(?<temp_hostname4>([A-Z0-9]+-?)+)-\d{10}-VMSS$"
| rex field=source_hostname "(?i)^AZ-(?<temp_hostname3>[^.]+)-\d{10}-VMSS$"
| rex field=source_hostname "(?i)^AZ-(?<temp_hostname2>[^.]+)-\d{10}$"
| rex field=source_hostname "(?i)^(?<temp_hostname1>[^.]+)_\d{10}$"
| eval alias_source_of=case(
    !isnull(temp_hostname4), temp_hostname4,
    !isnull(temp_hostname3), temp_hostname3,
    !isnull(temp_hostname2), temp_hostname2,
    !isnull(temp_hostname1), temp_hostname1,
    1=1, null()
)

Any suggestions for optimising the regex would be greatly appreciated.
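In case it is useful, a sketch of an alternative that drops the nested quantifier and the lookahead: capture the first copy of the string and match the second copy with a backreference. It returns the same three values on the sample data above, but it is only lightly tested, so please verify it against your full data set before relying on it:

| rex field=source_hostname "(?i)^AZ-(?<dup>.+)-(?<temp_hostname4>(?P=dup)[A-Z0-9]{6})-\d{10}-VMSS$"

Because the only backtracking left is the greedy .+ shrinking until the backreference lines up, the step count should be far lower than with the ([A-Z0-9]+-?)+ construction.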
I am unable to find an add-on or app on Splunkbase for getting ScienceLogic events into Splunk. Does anybody have a solution for getting ScienceLogic metrics/events into Splunk?
Hello Splunkers, I have a question. I'm trying to configure a custom role in Splunk where I assign capabilities natively. I'm recreating the default capabilities assigned to User in Splunk Enterprise and itoa_user in Splunk ITSI without using the inheritance option (doing this as a test so I can later remove capabilities as I need to). The problem I have is that once I save the role with all 65 matching capabilities selected and log in as the test user assigned to that role, dashboards that use the "getservice" command in their searches do not work and display the following error:
[subsearch]: command="getservice", [HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/config/itsi_team
This issue does not happen when I simply select "inherit capabilities" for User and itoa_user. Any ideas as to what could be causing this issue? I'm running Splunk version 9.1.1.
Has anyone been able to successfully run the Independent Stream Forwarder on Fedora or Debian? I have inherited a small stand-alone, bare-metal Splunk Enterprise 9.1.2 instance running on Fedora 39. I'm trying to point a NetFlow stream at the ISF installed on this same server, but I'm getting blank screens in Distributed Forwarder Manager and Configure Streams in the Splunk Stream app that is also installed on the same server. Thank you!
Hello, I have successfully configured the Splunk Universal Forwarder on a Windows machine to send WinEventLog System, Security, and Application logs to a specific index. Now I need to include logs with sourcetype = 'ActiveDirectory'. Could you please guide me through the necessary steps to specify the index for Active Directory logs in the configuration files?

inputs.conf:
[WinEventLog://Application]
disabled = 0
index = test
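If the goal is the Active Directory change data (sourcetype ActiveDirectory), that normally comes from an admon input rather than a WinEventLog one. A minimal sketch, assuming the defaults are acceptable and "test" is the target index; please check the Windows UF / Splunk Add-on for Microsoft Windows documentation for the full set of supported settings:

inputs.conf:
[admon://ActiveDirectory]
disabled = 0
monitorSubtree = 1
index = test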
I am creating a script that uses the CLI to create/delete Splunk roles. So far, I have been successful with creating them in the script when I use the admin user. However, my CISO says that I can't use the Splunk admin user, and I need to create a Splunk user (and a Splunk role) that can create and delete indexes. I have tried adding the indexes_edit capability, and when I tried doing the delete as my user, Splunk said that I needed the list_inputs capability. I have also tried adding access to all indexes. I am using this document for guidance at the moment, but it is rather light on detail: https://docs.splunk.com/Documentation/Splunk/latest/Security/Rolesandcapabilities

The command that I am running is:
curl -k -u editor-user:MyPasword1 https://localhost:8089/servicesNS/admin/myapp/data/indexes -d name=newindex

I get the following:
<response>
  <messages>
    <msg type="ERROR">Action forbidden.</msg>
  </messages>
</response>

This command succeeds if I use the admin user, but not with my editor user. The current capabilities that I have on my existing editor role are:
[role_editor]
admin_all_objects = disabled
edit_roles = enabled
indexes_edit = enabled
list_inputs = enabled
srchIndexesAllowed = *
srchMaxTime = 8640000
srchTimeEarliest = -1
srchTimeWin = -1

Does anyone know what extra capabilities I need, please?
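One thing that might be worth ruling out, purely as a sketch: the URL above posts into the admin user's namespace (servicesNS/admin/myapp), which a non-admin role can be refused regardless of its capabilities. Trying the global endpoint (or your own user's namespace) helps separate a namespace/ACL problem from a missing capability:

curl -k -u editor-user:MyPasword1 https://localhost:8089/services/data/indexes -d name=newindex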
Hello,

I have this data set:

event, start_time, end_time
EV1, 2024/07/11 12:05, 2024/07/11 13:05
EV2, 2024/07/11 21:13, 2024/07/12 04:13
EV3, 2024/07/13 11:22, 2024/07/14 02:44

I need to split the time intervals into hourly intervals, one interval for each row. Eventually I'm looking for an output like this:

event, start_time, end_time
EV1, 2024/07/11 12:05, 2024/07/11 13:00
EV1, 2024/07/11 13:00, 2024/07/11 13:05
EV2, 2024/07/11 21:13, 2024/07/12 22:00
EV2, 2024/07/11 22:00, 2024/07/12 23:00
EV2, 2024/07/11 23:00, 2024/07/12 00:00
EV2, 2024/07/11 00:00, 2024/07/12 01:00
EV2, 2024/07/11 01:00, 2024/07/12 02:00
EV2, 2024/07/11 02:00, 2024/07/12 03:00
EV2, 2024/07/11 03:00, 2024/07/12 04:00
EV2, 2024/07/11 04:00, 2024/07/12 04:13
EV3, 2024/07/13 11:22, 2024/07/14 12:00
EV3, 2024/07/13 12:00, 2024/07/14 13:00
EV3, 2024/07/13 13:00, 2024/07/14 14:00
EV3, 2024/07/13 14:00, 2024/07/14 15:00
EV3, 2024/07/13 15:00, 2024/07/14 16:00
EV3, 2024/07/13 16:00, 2024/07/14 17:00
EV3, 2024/07/13 17:00, 2024/07/14 18:00
EV3, 2024/07/13 18:00, 2024/07/14 19:00
EV3, 2024/07/13 19:00, 2024/07/14 20:00
EV3, 2024/07/13 20:00, 2024/07/14 21:00
EV3, 2024/07/13 21:00, 2024/07/14 22:00
EV3, 2024/07/13 22:00, 2024/07/14 23:00
EV3, 2024/07/13 23:00, 2024/07/14 00:00
EV3, 2024/07/13 00:00, 2024/07/14 01:00
EV3, 2024/07/13 01:00, 2024/07/14 02:00
EV3, 2024/07/13 02:00, 2024/07/14 02:44

I tried using the bin or timechart commands but they don't work. Do you have any suggestions?

Thank you,
Tommaso
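A possible sketch, assuming start_time and end_time are strings in the format shown: convert them to epoch time, generate the hour boundaries with mvrange, expand to one row per hour with mvexpand, and clamp the first and last segments to the original start and end.

| eval start=strptime(start_time, "%Y/%m/%d %H:%M"), end=strptime(end_time, "%Y/%m/%d %H:%M")
| eval hour_start=mvrange(floor(start/3600)*3600, end, 3600)
| mvexpand hour_start
| eval seg_start=max(start, hour_start), seg_end=min(end, hour_start + 3600)
| eval start_time=strftime(seg_start, "%Y/%m/%d %H:%M"), end_time=strftime(seg_end, "%Y/%m/%d %H:%M")
| table event start_time end_time

Note that mvexpand is subject to memory limits, so intervals spanning many hundreds of hours may need limits.conf adjustments.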
I have a search in my query where I pull data from an API, but the collect command does not allow me to save the results into my index. Any ideas?
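Hard to say much without the search and the exact error, but as a baseline sketch, collect only needs an events index that already exists and that your role is allowed to write to; my_summary_index is a placeholder here:

<your search that pulls the API data>
| collect index=my_summary_index sourcetype=stash

If it still fails, the job inspector / search.log usually shows whether the index is missing or the write was refused.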
Why is the inner join not working? Both searches give results on their own.

| inputlookup ABCD.csv
| eval CC=mvdedup(CC)
| rename CC as "Company Code"
| streamstats first(lastchecked) as scan_check
| eval key=_key, is_solved=if(lastchecked>lastfound OR lastchecked == 1,1,0), solved=finding."-".is_solved."-".key, blacklisted=if(isnull(blfinding),0,1), scandate=strftime(lastfound,"%Y-%m-%d %H:%M:%S"), lastchecked=if(lastchecked==1,scan_check,lastchecked), lastchecked=strftime(lastchecked,"%Y-%m-%d %H:%M:%S")
| fillnull value="N.A." Asset_Gruppe Scan-Company Scanner Scan-Location Location hostname "Company Code"
| search (is_solved=1 OR is_solved=0) (severity=informational) blacklisted=0 Asset_Gruppe="*" Scan-Company="*" Location="*" Scanner="*" dns="*" pluginname="*" ip="*" scandate="***" "Company Code"="*"
| rex field=scandate "(?<new_date>\A\d{4}-\d{2}-\d{2})"
| sort 0 -new_date
| eventstats first(new_date) as timeval
| rex field=new_date "-(?<date_1>\d\d)-"
| rex field=timeval "-(?<date_2>\d\d)-"
| strcat finding "#" NessusHost sid hostid pluginid finding
| where date_1=date_2
| fields dns ip lastchecked severity pluginid pluginname scandate Asset_Gruppe Location Scan-Company "Company Code" Scan-Location solved Scanner finding
| rename dns as Hostname, ip as IP
| join type=inner Hostname
    [| inputlookup device.csv
     | table Hostname]
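If both sides return results but the join produces nothing, the usual suspects are case or whitespace differences in the join key, or the subsearch hitting its row/time limits. As a sketch, normalising the key on both sides (field names as in your search) is a quick test; replacing the join with a lookup would avoid the subsearch limits altogether:

| eval Hostname=lower(trim(Hostname))
| join type=inner Hostname
    [| inputlookup device.csv
     | eval Hostname=lower(trim(Hostname))
     | table Hostname]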
Hello all, I've run into a problem with the backfill when creating (I also tried cloning) a KPI for Splunk license metrics using the following search:

index=_internal source=*license_usage.log type="Usage"
| fields idx, b
| eval indexname = if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin span=5min _time
| stats sum(b) as b by indexname, _time
| eval GB=round(b/1024/1024/1024, 3)
| fields _time, indexname, GB

The use case: I want a KPI for the license usage with the separate indexes as entities.

Configuration info: Since I want the license info on a per-index basis, I configured the KPI to be split into entities by the field "indexname". As for the frequency and calculation, I selected: calculate the maximum of GB per entity as the entity value, and the sum of entity values as the aggregate, over the last 5 minutes, every 5 minutes. Fill gaps in data with null values and use an unknown threshold level for them. So far so good... I also configured a backfill for the last 30 days (taxing on the system, but it should manage).

The problem: Upon seeing the message that the backfill was completed, I checked the itsi_summary index and found the backfill data of the KPI, but with regular gaps. More precisely, for each day it had backfilled the data from the activation time of the KPI (here 12:30) for about 6 hours (until 18:25/18:30), and then there were no further values for that day until the next day around 12:30, even though there is license usage during the gap times and it is available in the license_usage.log used by the KPI search. The data since activation is continuous and has no gaps.

I tried cloning the KPI and remaking it with both an ad-hoc and a base search, but all attempts showed the same curious results (just with different starting points, as the activation time of the KPI was different). So now I am wondering if there is some sort of limit for backfilling, or if perhaps someone has an idea what caused this strange backfill behaviour? (Also, there was no error message in the _internal index as far as I could tell.)

Help and ideas would be appreciated. Thanks in advance.
Hi, I'm facing an issue with 5 hosts. Recently we changed the hostname of these machines, but it is not reflected in the host field; the host field still shows the old hostname. Below is a sample log:

LogName=Security EventCode=4673 EventType=0 ComputerName=A0310PMTHYCJH15.tnjhs.com.pk
host = A0310PMNIAMT05    source = WinEventLog:Security    sourcetype = WinEventLog

We receive logs from these Windows hosts through a UF. I checked the apps deployed to these hosts and checked inputs.conf; the host field is not defined there. The new hostname is shown in the logs in the ComputerName field. Any suggestions for this problem would be appreciated.
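One place worth checking, as a sketch: on a Windows UF the host value is written to $SPLUNK_HOME\etc\system\local\inputs.conf at install time and is not updated when the machine is renamed, so the deployed apps can look clean while the local [default] stanza still carries the old name. Using the new name from your sample event as an assumed example:

# $SPLUNK_HOME\etc\system\local\inputs.conf on the forwarder, then restart the UF
[default]
host = A0310PMTHYCJH15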
Our scenario in a new deployment:
One indexer server (Windows), plus one separate Windows server as search head
One SC4S instance on Linux
Two customers
Customer one with Windows / Linux servers: Windows server Security log data is sent to the indexer via a Universal Forwarder installed on all servers; Linux server security log data is sent to SC4S and then to the indexer.
Customer two with Windows / Linux servers, ESX, network devices etc.: Windows server log data is sent to the indexer via a Universal Forwarder installed on all servers; Linux and other security log data is sent to SC4S and then to the indexer.
For both customers the Universal Forwarder data comes in on the same default port 9997, and SC4S sends on 514.
Data from the two customers should be separated into two different indexes.
The only thing differentiating these customers is the IP address segments the data is coming from.

I thought that separating log data according to the sending device's IP address would be a quite straightforward scenario, but so far I have tested several props/transforms options suggested in the community pages and read the documentation, and none of the solutions have been successful; all data ends up in the "main" index. If I put defaultDB = <index name> in indexes.conf, the logs are sent to this index, so the index itself is working and I can search it, but then all data would go to the same index...

What, then, is the correct way to separate data into two different indexes according to the sending device's IP address, or better still according to IP segment? As I'm really new to Splunk, I appreciate all advice if somebody here has done something similar and has insight on how to accomplish such a feat.
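For the Universal Forwarder feeds, a sketch of what index-time routing on the indexer can look like, assuming the host metadata of the incoming events matches a pattern you can key on; the IP segments and index names below are placeholders, and host:: stanzas take wildcard patterns, not CIDR notation. Note that UF-sent Windows events usually carry the hostname rather than the IP as host, in which case you would match on hostname patterns instead. For the SC4S feeds it is generally easier to set the target index in SC4S itself (via its splunk_metadata overrides) than on the indexer.

props.conf (indexers):
[host::10.1.1.*]
TRANSFORMS-customer1 = route_to_customer1

[host::10.2.2.*]
TRANSFORMS-customer2 = route_to_customer2

transforms.conf (indexers):
[route_to_customer1]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = customer1_index

[route_to_customer2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = customer2_index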
Hello wonderful Splunk community,
I have some data where I want the count to change only when the status changes:

Status    Count
---------------
Online    1
Online    1
Online    1
Break     2
Break     2
Online    3
Online    3
Lunch     4
Lunch     4
Lunch     4
Offline   5
Offline   5

Any help appreciated.
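A minimal sketch of one way to get that numbering, assuming the events are already in the order shown (add a sort first if they are not): remember the previous Status with streamstats, flag the changes, then running-sum the flags.

| streamstats current=f last(Status) as prev_status
| eval changed=if(isnull(prev_status) OR Status!=prev_status, 1, 0)
| streamstats sum(changed) as Count
| fields - prev_status changed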