All Topics


Hello, I am confused about the "Expires" setting when creating an alert. My alert is scheduled to run every day and Expires = 24 hours. Does that mean that after 24 hours the alert will no longer run? Thank you.
Is there a native way to run scripts in a pwsh.exe-managed environment? It's not mentioned in the docs, so I believe not: https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Inputsconf

We all know there is [powershell://<name>] in inputs.conf to run "classic" PowerShell scripts. To be precise, it runs the script in the "classic" PowerShell environment: depending on which Windows version/build the Universal Forwarder is installed on, that will be a PowerShell version up to 5.1 (which is managed by the powershell.exe binary, by the way). But now we have the newer PowerShell Core, managed by a different binary: pwsh.exe. PowerShell Core has new features that are not available in "classic" PowerShell, and the two are not 100% compatible. Additionally, PowerShell Core is platform agnostic, so we can install it on Linux and run PowerShell Core based scripts there (don't ask me why anyone would do that, but it's possible). Currently I run PowerShell Core scripts by starting a batch script in the cmd environment; cmd then starts pwsh.exe with the parameters needed to run my PowerShell Core script. Not elegant at all.
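For reference, here is a minimal sketch of that same wrapper wired through a plain [script://] stanza instead of the powershell:// input; the app layout, file names, interval, and pwsh.exe path are all hypothetical:

# inputs.conf in a hypothetical UF app
[script://.\bin\run_core.cmd]
# run_core.cmd is a one-line wrapper, e.g.:
#   "C:\Program Files\PowerShell\7\pwsh.exe" -NoProfile -File "%~dp0collect.ps1"
interval = 300
sourcetype = pwsh:core:output
disabled = false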
I would also like to check my .crt certificates from my own Microsoft CA. Is that possible?
Hello, friends! I tried to change the height of the gap between these components (screenshot), but in Edit Dashboard I didn't find anything to change it (screenshot). Thank you, guys!
The add-on worked fine until the upgrade to 9.3.1; exports to Azure are now halted with the error message: CRITICAL Could not connect to Azure Blob: NameError("name 'BlobServiceClient' is not defined"). We've deployed the latest version, 2.4.0, available from Splunkbase.
Hi Team, I am looking to create a Splunk app whose setup page has a dropdown asking the user to select a Splunk index; with that index I want to update the savedsearches.conf that I am using to trigger an alert. I have created the page like this:

<form version="1.1" theme="light">
  <label>App Setup</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="selected_index" searchWhenChanged="true">
      <label>Select Index</label>
      <search>
        <query>| eventcount summarize=false index=* | dedup index | table index</query>
      </search>
      <default>ibm_defender</default>
      <fieldForLabel>index</fieldForLabel>
      <fieldForValue>index</fieldForValue>
    </input>
  </fieldset>
  <!-- Button Row -->
  <row>
    <button label="Submit">
      <set token="form_submit">1</set>
      <redirect>
        <uri>/app/ibm_storage_defender/ibm_storage_defender_events</uri>
      </redirect>
    </button>
  </row>
</form>

But here the Submit button is not working; the setup page just stays there, though reload is working. Also, is my approach correct? In my savedsearches.conf I have configured the query like this:

search = index="$selected_index$" source=mp-defender message="Potential threat detected" | eval rule_title="High severity alert for Potential threat events", urgency="High"

Please also suggest if there is a better option for this.
Good morning fellow Splunkers. I have a challenge and was wondering if anyone could help me. In some logs with multiple fields sharing the same label, we use eval with mvindex to assign a different label to each of those fields. For example, in a log we have two fields labelled "Account Name", the first corresponding to the computer account and the second to the user account; we use mvindex to assign labels appropriately. This works well for a known number of fields. Now we also have logs with groups of fields: action, module and rule:

action: quarantine
module: access
rule: verified

action: execute
module: access
rule: verified

action: continue
module: access
rule: verified

action: reject
isFinal: true
module: pdr
rule: reject

I would like to use mvindex to label these so I can use the fields more easily. In the example above we have four groups of those fields, so I would have action1, action2, etc. (and the same for module and rule). However, the number of groups changes; it could be one, two, three or more. Is there any way to use mvindex dynamically somehow? I imagine we would first have to evaluate the number of those fields (or groups of fields) and then use mvindex to assign the different labels, unless there is a different way to achieve our goal. Many thanks in advance for any advice. Kind regards, Mike.
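Rather than numbering the fields, one common pattern is to zip the parallel multivalue fields together and expand each group onto its own row, where a plain mvindex works regardless of how many groups there are; a minimal sketch, assuming action, module, and rule are already extracted as multivalue fields:

... | eval grp=mvzip(mvzip(action, module), rule)
| mvexpand grp
| eval grp=split(grp, ",")
| eval action=mvindex(grp, 0), module=mvindex(grp, 1), rule=mvindex(grp, 2)
| fields - grp

Each result then carries one action/module/rule triple, so stats or table can be used directly without needing action1, action2, and so on.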
Hi, is there a way to check the Splunk Cloud timezone? I know from the documentation that it is GMT+0 and that it displays data based on your configured timezone. My user account is configured for GMT+8; however, when I check the Triggered Alerts page, the alerts show a CST timezone. Also, in our ES Incident Review, the time difference between the triggering event and the triggered alert is almost 2 hours. For reference, see below (screenshots): the triggered alert in Incident Review, with the highlighted value being the timestamp of the triggering event from the drill-down search.
Splunk logs this warning: WARN AggregatorMiningProcessor [10530 merging] - Breaking event because limit of 256 has been exceeded ... data_sourcetype="my_json"

The "my_json" sourcetype for the UF is:

[my_json]
DATETIME_CONFIG =
KV_MODE = json
LINE_BREAKER = (?:,)([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = _time
TIME_FORMAT = %2Y%m%d%H%M%S
TRUNCATE = 0
category = Structured
description = my json type without truncate
disabled = false
pulldown_type = 1
MAX_EVENTS = 2500
BREAK_ONLY_BEFORE_DATE = true

The data has about 5000 lines; a sample is below:

{ "Versions" : { "sample_version" : "version.json", "name" : "my_json", "revision" : "rev2.0"},
"Domains" : [{ "reset_domain_name" : "RESET_DOMAIN", "domain_number" : 2, "data_fields" : ["Namespaces/data1", "Namespaces/data2"] } ],
"log" : ["1 ERROR No such directory and file", "2 ERROR No such directory and file", "3 ERROR No such directory and file", "4 ERROR No such directory and file" ],
"address" : [{ "index": 1, "addr": "0xFFFFFF"} ],
"fail_reason" : [{ "reason" : "SystemError", "count" : 5}, { "reason" : "RuntimeError", "count" : 0}, { "reason" : "ValueError", "count" : 1} ],
... blahblah ...
"comment" : "None"}

How can we fix this warning? We added the MAX_EVENTS setting in props.conf, but it is not working.
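For context, that warning comes from the line-merging (aggregator) stage, and parsing-time settings such as LINE_BREAKER, SHOULD_LINEMERGE, and MAX_EVENTS only take effect where parsing happens, i.e. on an indexer or heavy forwarder, not on a Universal Forwarder. A hedged sketch of the usual shape, assuming the LINE_BREAKER really does delimit complete events so line merging can be switched off:

# props.conf on the indexer or heavy forwarder, not the UF
[my_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = (?:,)([\r\n]+)
TRUNCATE = 0

With SHOULD_LINEMERGE = false the aggregator, and with it the 256-line merge limit from the warning, is bypassed entirely.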
Why is it that neither of the Splunk.com site dashboard examples returns data for the following query: index=main sourcetype=access_combined* status=200 action=purchase | timechart count by productid ? Here's what the videos say we should get (screenshot), and here's what the query returns (screenshot): it groups by date successfully but doesn't yield results by product. Both of the online dashboard creation videos at the URLs below yield the desired results shown in the first screenshot. Note: the source is "tutorialdata.zip:*". The two video training sites are:
https://www.splunk.com/en_us/training/videos/all-videos.html
https://www.splunk.com/en_us/blog/learn/splunk-tutorials.html#education
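One quick sanity check, since SPL field names are case-sensitive (the tutorial data's field is usually spelled productId rather than productid), is to list which product-like fields the events actually carry; a sketch:

index=main sourcetype=access_combined* status=200 action=purchase
| fieldsummary
| search field=product*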
Is there a way to create a detector that alerts when a particular user (identified by part of the URL) is experiencing a higher number of errors? For example, if I have a /user/{customerId}/do-something URL, then I want to be alerted when a particular {customerId} has a high number of errors within a specific time period. If there is a higher number of errors but they are mostly for different {customerId} values, then I don't want a notification. Thanks.
Hi, based on the following JSON document, I want to find the value of "Geography" for the "City" given as input. Here is the JSON:

{
  "Company" : "Microsoft",
  "Cloud" : "Azure",
  "DataCenters" : [
    { "Geography" : "USA", "Region" : "East", "City": "New York" },
    { "Geography" : "India", "Region" : "West", "City": "Ahmedabad" },
    { "Geography" : "USA", "Region" : "West", "City": "San Fransisco" },
    { "Geography" : "South Africa", "Region" : "West", "City": "Capetown" }
  ]
}

Can somebody please help me fetch this information? Thanks.
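A minimal sketch of one way to do this, assuming the JSON above is the raw event: expand the DataCenters array to one result per element, extract its fields, then filter on the city supplied as input (Ahmedabad here as an example):

... | spath path=DataCenters{} output=dc
| mvexpand dc
| spath input=dc
| search City="Ahmedabad"
| table Geography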
Hey gang, I'm using the Splunk Add-on for Microsoft Azure to ingest AAD sign-in logs into Splunk under the azure:aad:signin sourcetype; however, there seems to be a gap between the number of events visible in Entra ID versus what is visible from Splunk. There are always slightly more events in Entra ID, and the gap seems to worsen as the volume of events grows. See this table:

Time         Splunk    Entra ID    Difference
1st hour       3265        3305            40
2nd hour       3085        4804          1719
3rd hour       3264        6309          3045
4th hour       2274        3841          1567
5th hour       1659        2632           973
6th hour       2168        3442          1274
7th hour       6236        8923          2687
8th hour      22716       35901         13185
9th hour      63186      101602         38416
10th hour     88607      145503         56896
11th hour     68407      140095         71688
12th hour     76866      124423         47557
13th hour     68717      122355         53638
14th hour     81310      144880         63570
15th hour     50849      140876         90027
16th hour     42972      124040         81068
17th hour     33693       91792         58099
18th hour     13683       50408         36725
19th hour     13973       38695         24722
20th hour     12182       29645         17463
21st hour      9734       24187         14453
22nd hour      8037       16935          8898
23rd hour      5869       11994          6125
24th hour      5631        8837          3206
Total        688383     1385424        697041
Percentage difference                   50.31%

- The gap appears even when searching historical logs, i.e. time slots over the last two weeks.
- The retention period of the index is 90 days, so the events should not have expired yet.
- There are no line-breaking, event-breaking, aggregation, timestamp, or other parsing errors for the sourcetype.
- The gap is still present when searching over all time.
- The internal logs from the Splunk Add-on for Microsoft Azure only show the following two error messages, which don't seem relevant and only appeared a few times over the last month or so:

File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/binding.py", line 1337, in request
  raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- KV Store is in maintenance mode.

File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/modularinput/event.py", line 111, in write_to
  stream.flush()
BrokenPipeError: [Errno 32] Broken pipe

I have updated the polling interval of the Microsoft Entra ID Interactive Sign-ins input to 900 seconds, but still the issue persists. What other explanation could there be for the gap?

Thanks, K
I am currently using the new Dashboard Studio interface; the dashboards make calls to saved reports in Splunk. Is there a way to have a time range input work for the dashboard but also apply to the reports? The issue we face is that we can add the reports to the Studio dashboard, but by default they are stuck as static reports. How can we add a time range input that works with both the dashboard and the reports?
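In the dashboard's JSON definition, the usual pattern is to bind each data source's queryParameters to the time-range input's tokens; a hedged sketch with a hypothetical data source name and query, noting that this applies to inline ds.search data sources rather than ds.savedSearch references, which follow the report's own time range:

"dataSources": {
  "ds_example": {
    "type": "ds.search",
    "options": {
      "query": "index=main | timechart count",
      "queryParameters": {
        "earliest": "$global_time.earliest$",
        "latest": "$global_time.latest$"
      }
    }
  }
}

One practical consequence: converting a saved-report data source to an inline ds.search with the same SPL is a common way to let the dashboard's time picker drive it.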
Hi, I have an index called Index1 with a sourcetype called SourceType1, and another index called Index2 with a sourcetype called SourceType2. Some data is in the combination Index1 <-> SourceType1 and some data is in the combination Index2 <-> SourceType2. How can I write a query that targets the correct index and sourcetype pairs?
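A minimal sketch: pair each index with its sourcetype inside parentheses and OR the pairs, so the search never mixes the combinations:

(index=Index1 sourcetype=SourceType1) OR (index=Index2 sourcetype=SourceType2) | ...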
I have a cyber-security finding that states "The splunk service accepts connections encrypted using SSL 2.0 and/or SSL 3.0". Of course, SSL 2.0 and 3.0 are not secure protocols. How do I disable SSL 2.0/3.0? Can I just disable them in the browser, or do I need to change a setting within Splunk?
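This is a server-side setting, not a browser one; which file to touch depends on which "splunk service" the scanner flagged. A hedged sketch covering the two usual suspects, assuming a Splunk version recent enough to support TLS 1.2:

# server.conf (splunkd, management port 8089)
[sslConfig]
sslVersions = tls1.2

# web.conf (Splunk Web, port 8000/443)
[settings]
sslVersions = tls1.2

A restart is needed for either change to take effect.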
Need some assistance with creating a query where I am trying to capture the parent folder and the first child folder from a print output log that contains both Windows and Linux folder paths. Sample data is below; the folder paths I am trying to get in a capture group are in the source_dir column.

_time, username, computer, printer, source_dir, status
2024-09-24 15:32, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\program\..., Printed
2024-09-24 13:57, buser, cmp_buser, print01_offic1, c:\program files\documents\..., Printed
2024-09-24 12:13, cuser, cmp_cuser, print01_offic2, \\cpn-fs.local\data\transfer\..., In queue
2024-09-24 09:26, buser, cmp_buser, print01_offic1, F:\transfers\program\..., Printed
2024-09-24 09:26, buser, cmp_buser, print01_front1, \\cpn-fs.local\transfer\program\..., Printed
2024-09-24 07:19, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\program\..., In queue

I am currently using a Splunk query where I match these folders in my initial search, but I want to handle this with a rex command so I can add an eval to determine whether jobs were printed locally or from a server folder. The current query is:

index=printLog source_dir IN ("\\\\cpn-fs.local\data\*", "\\\\cpn-fs.local\transfer\*", "c:\\program files\\*", "F:\\transfer\\*") status="Printed"
| table status, _time, username, computer, printer, source_dir

I tried the following rex but got no results:

| rex field=source_dir "(?i)<FolderPath>(?i[A-Z][a-z]\:|\\\\{1})[^\\\\]+)\\\\[^\\\\]+\\\\)"

In a second attempt, I generated these two regexes with the Splunk field extractor. I know I need to combine them with an OR to cover both the Windows and Linux paths, but I get an error when I try:

Regex generated for Windows (c:\program files): ^[^ \n]* \w+,,,(?P<FolderPath>\w+:\\\w+)
Regex generated for Linux (\\cpn-fs.local\data): ^[^ \n]* \w+,,,(?P<FolderPath>\\\\\w+\-\w+\d+\.\w+\.\w+\\\w+)

To start, I am looking for output like the example below, replacing source_dir with the rex-created FolderPath:

_time, username, computer, printer, FolderPath, status
2024-09-24 15:32, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\, Printed
2024-09-24 13:57, buser, cmp_buser, print01_offic1, c:\program files\, Printed

Thanks for any help given.
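A hedged sketch of a single rex covering both shapes (drive letter plus first folder, or UNC host plus first share); the backslash counts look extreme because SPL string parsing and PCRE each consume one level of escaping, so treat this as a starting point to test rather than a finished pattern:

| rex field=source_dir "^(?<FolderPath>\\\\\\\\[^\\\\]+\\\\[^\\\\]+\\\\|[A-Za-z]:\\\\[^\\\\]+\\\\)"
| eval print_origin=if(match(FolderPath, "^\\\\\\\\"), "server", "local")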
Hi Splunk, I have a table like the one below:

Component   Green   Amber   Red
Resp_time       0     200   400
5xx             0      50   100
4xx             0      50   100

I want to combine the rows to produce a single row like this:

Resp_time_Green   Resp_time_Amber   Resp_time_Red   5xx_Green   5xx_Amber   5xx_Red   4xx_Green   4xx_Amber   4xx_Red
0                 200               400             0           50          100       0           50          100
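One common way is to unpivot the table, build the combined field names, and collapse back to a single row; a minimal sketch, assuming the table already has the columns Component, Green, Amber, Red:

... | untable Component color value
| eval name=Component."_".color
| eval {name}=value
| fields - Component, color, value, name
| stats first(*) as *

The resulting column order is not guaranteed to match the Green/Amber/Red order above, so a final table command may be needed if ordering matters.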
Hi Team, I have the below JSON field in a Splunk event:

[{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_1","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_2","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_3","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"}]

I have added 3 entries just as an example, but in reality we have more than 200 records in this field in a single event. When I use spath to extract this data it gives blank results; the same data, tested with fewer records (<10), extracts all the key-value pairs fine. Is there a better way to extract fields from large event data? Please help me with the SPL query. Thanks. @yuanliu @gcusello
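For context, spath stops extracting once its input exceeds the extraction cutoff (5,000 characters by default, configurable via extraction_cutoff under [spath] in limits.conf), which would explain why small payloads work and large ones come back blank. A hedged sketch of one workaround that splits the array into one result per record before calling spath; json_field is a stand-in for the real field name, and the rex assumes the records contain no nested braces, as in the sample:

... | rex field=json_field max_match=0 "(?<record>\{[^{}]+\})"
| mvexpand record
| spath input=record

Raising extraction_cutoff is the other option, though on Splunk Cloud that typically requires a support ticket.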