All Topics

So I'm trying to install an app on Splunk Cloud; it went through the checks but failed with the following error: Detected an outdated version of the Splunk SDK for Python (1.6.15). Please upgrade to version 1.6.16 or later. File: bin/splunklib/binding.py Does anyone know how to upgrade the Splunk SDK for Python on Splunk Cloud?
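On Splunk Cloud the SDK is not upgraded in place; the app package itself has to be rebuilt with a newer splunklib copied into bin/ and then re-vetted. As a quick way to confirm which SDK version is bundled before re-uploading, this Python sketch parses the version tuple out of splunklib/__init__.py (the __version_info__ assignment format is an assumption; check your SDK copy):

```python
import re

def bundled_sdk_version(init_py_text):
    """Extract a version tuple from the contents of splunklib/__init__.py."""
    m = re.search(r"__version_info__\s*=\s*\(([\d,\s]+)\)", init_py_text)
    if m:
        return tuple(int(part) for part in m.group(1).split(","))
    return None  # no recognizable version assignment found

# Sample content as it appears in older SDK releases (assumed format):
sample = "__version_info__ = (1, 6, 15)\n"
print(bundled_sdk_version(sample))  # (1, 6, 15)
```

If this still reports a version below 1.6.16 after repackaging, an old splunklib directory is likely still shipped somewhere else inside the app.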
We've started looking into Risk-Based Alerting (RBA) in Splunk ES, and noticed that the logic for the risk notables is in fact case sensitive for risk objects (users and systems, mostly). This is a bit counterintuitive, as the Asset & Identity (A&I) settings clearly say that matching is not case sensitive, but we figured out that RBA doesn't use A&I at all, and instead just uses the field value for the user/system directly, without any logic to merge users/systems under different aliases. I've made a small change to the RBA alert "Risk Threshold Exceeded For Object Over 24 Hour Period" to at least make it case insensitive, in case anyone else needs a fix for this problem as well. Just change the first two lines of the search from this:   | tstats `summariesonly` sum(All_Risk.calculated_risk_score) as risk_score, count(All_Risk.calculated_risk_score) as risk_event_count, values(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, dc(All_Risk.annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count, values(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, dc(All_Risk.annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count, values(All_Risk.tag) as tag, values(source) as source, dc(source) as source_count from datamodel=Risk.All_Risk by All_Risk.risk_object,All_Risk.risk_object_type | `drop_dm_object_name("All_Risk")`   To this:   | tstats `summariesonly` sum(All_Risk.calculated_risk_score) as risk_score, count(All_Risk.calculated_risk_score) as risk_event_count, values(All_Risk.annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, values(All_Risk.annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, values(All_Risk.tag) as tag, values(source) as source from datamodel=Risk.All_Risk by All_Risk.risk_object,All_Risk.risk_object_type | `drop_dm_object_name("All_Risk")` | eval 
risk_object=lower(risk_object) | stats sum(risk_score) as risk_score, sum(risk_event_count) as risk_event_count, values(annotations.mitre_attack.mitre_tactic_id) as annotations.mitre_attack.mitre_tactic_id, dc(annotations.mitre_attack.mitre_tactic_id) as mitre_tactic_id_count, values(annotations.mitre_attack.mitre_technique_id) as annotations.mitre_attack.mitre_technique_id, dc(annotations.mitre_attack.mitre_technique_id) as mitre_technique_id_count, values(tag) as tag, values(source) as source, dc(source) as source_count by risk_object, risk_object_type    
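For anyone wondering why the extra eval/stats pass is needed: here is a minimal Python sketch (with hypothetical scores) of how case-sensitive grouping splits one user's risk score across aliases, while lowercasing first merges them, which is exactly what the eval risk_object=lower(risk_object) step does before the stats.

```python
from collections import defaultdict

# Hypothetical risk events: the same user appears under two capitalizations.
events = [("Alice", 30.0), ("alice", 45.0), ("BOB-PC", 20.0)]

def total_scores(events, case_insensitive=False):
    """Sum scores per risk object, optionally folding case first."""
    totals = defaultdict(float)
    for obj, score in events:
        key = obj.lower() if case_insensitive else obj
        totals[key] += score
    return dict(totals)

print(total_scores(events))                         # scores split across 'Alice' and 'alice'
print(total_scores(events, case_insensitive=True))  # merged under 'alice'
```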
section for calculation_window_telemetry in /apps/SA-ITOA/default/savedsearches.conf:     """ search = | inputlookup calculation_window_telemetry_lookup | eval zipped = mvzip('kpis.title', 'kpis.search_alert_earliest' , ",") | fields - kpis.title, kpis.search_alert_earliest| mvexpand zipped | eval x = split(zipped,",") | eval kpi_title = mvindex(x, 0) | eval search_alert_earliest = mvindex(x, 1) | fields - x, zipped| eval calculation_window_{search_alert_earliest}_min = 1 | where kpi_title!="ServiceHealthScore" | fields calc* | stats sum(*) as * """     Search Query:      """ | savedsearch calculation_window_telemetry | fields calculation_window_1_min calculation_window_5_min calculation_window_15_min calculation_window_1440_min | addtotals | rename Total as data.calculationWindowUsage.predefinedWindow.totalCount | rename calculation_window_1_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_1_min | rename calculation_window_5_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_5_min | rename calculation_window_15_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_15_min | rename calculation_window_1440_min as data.calculationWindowUsage.predefinedWindow.calculationWindowValueCount.calculation_window_1440_min | append [ | savedsearch calculation_window_telemetry | fields - calculation_window_1_min calculation_window_5_min calculation_window_15_min calculation_window_1440_min | addtotals | rename Total as data.calculationWindowUsage.customWindow.totalCount | rename "calculation*" as data.calculationWindowUsage.customWindow.calculationWindowValueCount.calculation*] | stats first(*) as * | fillnull | makejson version(string),data.* output=event | table event """     Current output:   { "data": { "calculationWindowUsage": { "customWindow": { "calculationWindowValueCount": { "calculation_window_1260_min": 1, 
"calculation_window_111_min": 1 }, "totalCount": 2 }, "predefinedWindow": { "calculationWindowValueCount": { "calculation_window_1440_min": 1, "calculation_window_15_min": 1, "calculation_window_1_min": 1, "calculation_window_5_min": 1 }, "totalCount": 4 } } } }     Expected output:     { "data": { "calculationWindowUsage": { "customWindow": { "calculationWindowValueCount": [{ "calculation_window_value": 1260, "count": 1 }, { "calculation_window_value": 111, "count": 1 }], "total_count": 2 }, "predefinedWindow": { "calculationWindowValueCount": [{ "calculation_window_value": 1, "count": 1 }, { "calculation_window_value": 5, "count": 1 }, { "calculation_window_value": 15, "count": 1 }, { "calculation_window_value": 1440, "count": 1 }], "total_count": 4 } } } }     I need the output as a list of dictionaries; can anyone help me with this? Thank you.
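Independent of SPL, the reshaping being asked for is a map from calculation_window_<N>_min keys to a list of {calculation_window_value, count} dictionaries. A Python sketch of that target transformation (the key layout is assumed from the outputs above):

```python
def to_value_count_list(window_counts):
    """Turn {"calculation_window_<N>_min": count, ...} into
    [{"calculation_window_value": N, "count": count}, ...] sorted by value."""
    out = []
    for key, count in window_counts.items():
        value = int(key.split("_")[2])  # "calculation_window_1440_min" -> 1440
        out.append({"calculation_window_value": value, "count": count})
    return sorted(out, key=lambda d: d["calculation_window_value"])

predefined = {"calculation_window_1440_min": 1, "calculation_window_15_min": 1,
              "calculation_window_1_min": 1, "calculation_window_5_min": 1}
print(to_value_count_list(predefined))
```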
The values I need are located in the field "msg". Each msg contains 3 records. I run this query and get the result below:   index=summary | search msg="*blablabla*" | rex max_match=3 "Type=(?<Type>.+?)\," | rex max_match=3 "Restaurant=(?<Restaurant>.+?)\," | rex max_match=3 "Date=(?<Date>.+?)\," | rex max_match=3 "status=(?<status>.+?)\," | table Date, Restaurant, Type, status   Date Restaurant Type Status 2021-03-10 2022-01-04 2021-05-01 Domino SOUTHERN RESTAURANTS TRUST MCDONALD'S A B A NEW USED USED 2021-03-11 2021-03-12 2022-02-05 KFC Domino MCDONALD'S C B A NEW NEW USED 2021-03-11 2021-12-20 2021-05-09 Rooster CYREN BAR MCDONALD'S A A B NEW USED USED 2021-03-12 2021-12-18 2021-06-22 Helo KFC MCDONALD'S A A B NEW USED USED 2021-03-12 2022-01-05 2022-01-14 KFC MCDONALD'S MCDONALD'S A A B   The question is: how can I separate each record into its own row? I would like to use a filter like where Restaurant="KFC" to look for a specific restaurant.
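The underlying issue is that rex max_match=3 produces parallel multivalue fields, so each row holds three records at once. In SPL the usual fix is mvzip plus mvexpand to get one row per record; this Python sketch shows the same expansion idea with made-up values:

```python
# Parallel multivalue fields from one event (hypothetical values).
dates = ["2021-03-11", "2021-03-12", "2022-02-05"]
restaurants = ["KFC", "Domino", "MCDONALD'S"]
types = ["C", "B", "A"]
statuses = ["NEW", "NEW", "USED"]

# Expand into one record per position -- the equivalent of mvzip + mvexpand.
records = [dict(Date=d, Restaurant=r, Type=t, status=s)
           for d, r, t, s in zip(dates, restaurants, types, statuses)]

# Now per-record filtering works, like | where Restaurant="KFC" in SPL.
kfc = [rec for rec in records if rec["Restaurant"] == "KFC"]
print(kfc)  # [{'Date': '2021-03-11', 'Restaurant': 'KFC', 'Type': 'C', 'status': 'NEW'}]
```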
I have Splunk logs stored in this format (2 example datasets below):        {"org":"myorg","environment":"prod","proxyName":"myproxy","uriPath":"/getdata","verb":"POST","request":"\n \"city\":\"irving\",\n\"state\":\"TX\",\n\"isPresent\":\"Y\"","uid":"1234"} {"org":"myorg","environment":"prod","proxyName":"myproxy","uriPath":"/getdata","verb":"POST","request":"\n\"city\":\"san diego\",\n\"state\":\"CA\",\n\"isPresent\":\"N\"","uid":"1234"}         I'm trying to find all records where isPresent is "Y". Now request is a string containing a JSON string representation. So, I'm using a query like this:       \\"isPresent\\":\\"Y\\" uid=1234 AND request!=null       But this query brings up both isPresent=Y and isPresent=N records, effectively meaning that the filter is not working at all. Any idea how I can search a string field to check if it contains a specific substring?
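One way to see why the raw-token search matches both events: the substring test has to run against the request field of each parsed event, not against raw tokens anywhere in the event. In SPL that is roughly | where like(request, "%isPresent%Y%") or a match() on the field (sketch; exact escaping varies). The same per-event check in Python, with the example events trimmed to the relevant fields:

```python
import json

# The two example events as raw strings (escaped the way Splunk stores them),
# reduced to the request and uid fields for brevity.
events = [
    '{"request":"\\n \\"city\\":\\"irving\\",\\n\\"state\\":\\"TX\\",\\n\\"isPresent\\":\\"Y\\"","uid":"1234"}',
    '{"request":"\\n\\"city\\":\\"san diego\\",\\n\\"state\\":\\"CA\\",\\n\\"isPresent\\":\\"N\\"","uid":"1234"}',
]

# Parse each event and run the substring test on the request field itself.
matches = [e for e in events if '"isPresent":"Y"' in json.loads(e)["request"]]
print(len(matches))  # 1
```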
Hello everyone, asking for your help with my subsearch query. I need to find events in index="1", take the Logon_ID from them, and run a search in another index (index="2"). My current search: index="2" EventCode=4662 AND (Condition="1" OR Condition="2") [ search index="1" EventCode=4624 Logon_Type=3 | eval Logon_ID=lower(Logon_ID) | eval Logon_ID=mvindex(Logon_ID,-1) | fields Logon_ID] It doesn't work; as I understand it, the main search runs with only one Logon_ID, though index 1 contains many Logon_ID values. What could be the reason? Thank you.
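As a debugging aid: a subsearch that returns a single field is expanded into an OR of field=value terms before the outer search runs (appending | format to the subsearch shows the generated clause, and subsearch result/time limits can silently truncate it). A small sketch of that assumed expansion:

```python
def expand_subsearch(field, values):
    """Mimic how a subsearch's returned column becomes an OR'd filter."""
    return "(" + " OR ".join('{}="{}"'.format(field, v) for v in values) + ")"

logon_ids = ["0x1A2B", "0x3C4D", "0x5E6F"]  # made-up values from index="1"
print(expand_subsearch("Logon_ID", logon_ids))
# (Logon_ID="0x1A2B" OR Logon_ID="0x3C4D" OR Logon_ID="0x5E6F")
```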
I have the following two entries:   Time Event 8/16/22 1:46:22.592 PM 2022/08/16 13:46:22.592154:P_GUI_SERV06 :pbaho3 : 98(cli) : Exit Allocate Order on portfolio list [ABC_DPM_MM_BALANCED] with all instruments (Thread:00000001197f4730) host = PBIPSG07source = /app/PBISG/aaa/current/msg/server.logsourcetype = prd-pbisg-server-logtimeendpos = 26timestartpos = 0 8/16/22 1:45:51.201 PM 2022/08/16 13:45:51.201360:P_GUI_SERV06 :pbaho3 : 98(cli) : Start Allocate Order on portfolio list [ABC_DPM_MM_BALANCED] with all instruments (Thread:00000001197f4730) host = PBIPSG07   An operation starts with a 'Start Allocate Order' entry and ends with an 'Exit Allocate Order' entry. How do I build a Splunk search to calculate the duration between those two events?   Building on that, I would like a more complex search: notice the ':pbaho3 :' token, so there will be multiple users (in this case 'pbaho3'). How do I group the entries by specific user?
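In SPL this is typically done with transaction, or with stats earliest(_time) latest(_time) by user where the duration is latest minus earliest. A Python sketch of that stats-style computation using the two timestamps from the events above:

```python
from datetime import datetime

# The Start/Exit pair from the events above, tagged with the user field.
events = [
    ("2022/08/16 13:45:51.201360", "pbaho3", "Start Allocate Order"),
    ("2022/08/16 13:46:22.592154", "pbaho3", "Exit Allocate Order"),
]

def parse_ts(s):
    return datetime.strptime(s, "%Y/%m/%d %H:%M:%S.%f")

# stats-style earliest/latest per user; duration = latest - earliest.
per_user = {}
for ts, user, action in events:
    t = parse_ts(ts)
    lo, hi = per_user.get(user, (t, t))
    per_user[user] = (min(lo, t), max(hi, t))

for user, (lo, hi) in per_user.items():
    print(user, round((hi - lo).total_seconds(), 6))  # pbaho3 31.390794
```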
Hi, We have a CSV file with master data where all the constants are stored; it has four columns. In the Splunk query we get one of these columns as a result, and we need to replace that outcome with the value from another column of the CSV file.    Sample: we have an id like "58vv1578eff-985sfv294-asfd" in the query result, and this needs to be changed to 2897 in the final result.    TIA.,    Regards, SM. 
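In Splunk this is normally a lookup, for example | lookup master.csv id OUTPUT code (file and column names assumed here). The underlying mapping is just a key-to-value table, as this Python sketch shows; the second CSV row is invented for illustration:

```python
import csv
import io

# Hypothetical master-data CSV with four columns.
master_csv = """id,code,name,region
58vv1578eff-985sfv294-asfd,2897,Widget,EMEA
77ab3344cde-112ffgg55-qqzz,3011,Gadget,APAC
"""

# Build the id -> code mapping, then translate a query result.
id_to_code = {row["id"]: row["code"]
              for row in csv.DictReader(io.StringIO(master_csv))}
print(id_to_code["58vv1578eff-985sfv294-asfd"])  # 2897
```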
Hello, I am importing Cisco IronPort data into Splunk; the field "subject" contains UTF-8 encoded data with jumbled characters. Is there a way to automatically decode the subject to a readable string? Currently I am using PowerShell and the CyberChef application to decode manually. Kindly let me know how to solve this issue. Regards, RK
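"Jumbled" subjects in mail gateway logs are often RFC 2047 encoded-words, strings of the form =?UTF-8?B?...?=. Python's standard library can decode these; a minimal sketch with a made-up subject:

```python
from email.header import decode_header, make_header

# A made-up RFC 2047 encoded-word subject, as mail gateways often log them.
raw = "=?UTF-8?B?SGVsbG8gV29ybGQ=?="
decoded = str(make_header(decode_header(raw)))
print(decoded)  # Hello World
```

Whether your subjects are actually RFC 2047 encoded (versus raw UTF-8 mis-rendered by a wrong CHARSET setting) is worth confirming against a sample event first.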
I have two REX strings that work independently... ^\S+\s(?<microService>\S+).* [supplied by a previous Splunk Answers reply] ...and... "(?i)^(?:[^\+]*\+){2}\d+\]\s+\"(?P<missingFileDetails>[^\"]+)" [generated via erex]   How can these two REX commands be merged?
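One approach, sketched in Python: drop the second pattern's ^ anchor and join the two with a non-greedy gap, keeping both named groups in a single expression. The same merged pattern would go into a single rex command in SPL; the sample log line below is invented.

```python
import re

merged = re.compile(
    r'^\S+\s(?P<microService>\S+).*?'                          # second token on the line
    r'(?:[^+]*\+){2}\d+\]\s+"(?P<missingFileDetails>[^"]+)',   # text after two '+'s and a bracket
    re.IGNORECASE)

line = '2022-08-16 orders-svc foo+bar+42] "data/missing_file.csv" not found'
m = merged.search(line)
print(m.group("microService"), m.group("missingFileDetails"))
# orders-svc data/missing_file.csv
```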
Hi All, I am trying to build a use case with the below scenarios: 1) Person A can do tasks X and Y but not task Z or, 2) Person A can do tasks Y and Z but not task X or, 3) Person A can either do task X or task Y or task Z At no given point is Person A allowed to conduct all three tasks, and at no given point should the IP addresses of Person A and Person B be the same. Information is being picked from two separate source types in the same index. The challenge: when Person A does tasks X and Y and Person B does task Z, how can I get the IP addresses of both Person A and Person B so as to compare them and make sure they are two different IP addresses, keeping in mind that the IP address is only available in one source type and not the other? Any assistance on this would be appreciated.
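A sketch of the two constraints in Python with hypothetical events; the SPL analogue would be something like stats values(task) as tasks, values(ip) as ips by person across both source types, followed by eval tests on the collected sets:

```python
from collections import defaultdict

# Hypothetical events: (person, task, ip, sourcetype).
events = [
    ("A", "X", "10.0.0.1", "st1"),
    ("A", "Y", "10.0.0.1", "st1"),
    ("B", "Z", "10.0.0.2", "st1"),
]

# stats-style collection: sets of tasks and IPs per person.
tasks = defaultdict(set)
ips = defaultdict(set)
for person, task, ip, sourcetype in events:
    tasks[person].add(task)
    ips[person].add(ip)

all_three = tasks["A"] >= {"X", "Y", "Z"}  # Person A did X, Y and Z
shared_ip = bool(ips["A"] & ips["B"])      # A and B used the same IP
violation = all_three or shared_ip
print(violation)  # False
```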
Hi All, I am having issues with parsing the JSON log time format in milliseconds. This is the format of my JSON logs. {"l":1239,"t":1660632134927,"publicationMode":"NORMAL","sequenceTime":3,"requestRead":0,"returnCode":200,"ownerName":"ihsmarkit_uxssd_inbound","payload":500,"requestId":"8fe635c7-f375-466d-aa24-a9abf9512d74","entityName":"Session","publication":5,"serverTime":12,"persist":4,"deserialize":0,"mergeTime":0,"eventId":"P81.1660632134.005","headers":{"Accept":"application\/json","resource":"\/uxssd\/Session","Connection":"close","User-Agent":"Jetty\/9.4.21.v20190926","X-Forwarded-For":"10.186.77.134","Host":"feed.dev.datadelivery.info","Accept-Encoding":"gzip","Content-Length":"1607","X-Real-IP":"10.186.77.134","Content-Type":"multipart\/form-data; boundary=JettyHttpClientBoundary199ah6aq27cmi5vr"},"method":"POST","ip":"10.186.77.134","accountUniqueId":"259B35941636A89528C138288A05","format":"JSON","requestOutcome":"SUCCESS","sessionId":"2A7460F81829F7912DC0E7192B0263844A275FBC0ACE195***@dv11","params":{"data":"Bytes:500","apiKey":"2A7460F81829F7912DC0E7192B0263844A275FBC0ACE195***@dv11","dataFormat":"JSON","httpMethod":"POST","messageType":"MERGE","requestId":"8fe635c7-f375-466d-aa24-a9abf9512d74","dictionaryVersion":"1","entity":"Session"},"processed":1,"sequence":9250657,"validationTime":0,"entities":{"Session":1},"success":true,"namespace":"uxssd"} In the first line we can see the time field, "t":1660632134927, and this is my current props.conf:

[json]
TIME_FORMAT = %s%3N
TIME_PREFIX = t:\s+
MAX_TIMESTAMP_LOOKAHEAD = 300
BREAK_ONLY_BEFORE_DATE = true
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRUNCATE = 200000
LINE_BREAKER = ^\{
description = mon-4637
disabled = false

Time extraction is happening, but milliseconds are not getting extracted as expected.
This is the current output: 8/16/22 6:42:14.000 AM. Ideally it should include milliseconds, for example 8/16/22 6:42:14.927 AM. Please suggest what changes I need to make in my props.conf so that milliseconds are also visible in the event time in Splunk. Any help here will be highly appreciated.
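"t":1660632134927 is epoch time in milliseconds, so %s%3N is the right TIME_FORMAT. One thing worth checking is whether TIME_PREFIX actually matches the raw text: the event contains "t": (quotes, no trailing space), which the regex t:\s+ does not match, so a prefix along the lines of \"t\": may be needed (a sketch; verify against your raw events). The value itself converts cleanly, as this Python check shows:

```python
from datetime import datetime, timezone

t_ms = 1660632134927  # the "t" field: epoch milliseconds
secs, ms = divmod(t_ms, 1000)
dt = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=ms * 1000)
print(dt.isoformat(timespec="milliseconds"))  # 2022-08-16T06:42:14.927+00:00
```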
Hello, I have a URL with a proper business transaction definition, let's call it '/foo'. I'm able to monitor it well. I want to monitor another URL, '/foo/bared', but when I try to create a business transaction and check the live preview, it is masked by the parent URL (/foo), so it doesn't hit the separate BT. Is there a way to achieve this? I also tried creating a custom service endpoint, but I don't see it on the service endpoints home page; there again is the parent URL (/foo). Regards.
How can I forward app logs from Splunk to a third-party application (for example, a log insights tool)? Is this feasible? Can you please provide the steps for implementation?
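One common approach is routing a copy of the data from a heavy forwarder to the third-party tool over syslog/TCP. A hedged configuration sketch follows; the host, port, group name, and sourcetype are placeholders, and the exact settings should be checked against the Splunk forwarding documentation for your version:

```ini
# outputs.conf -- keep indexing as-is, add a syslog output group
[syslog:third_party]
server = logtool.example.com:514

# props.conf -- route one sourcetype through a transform
[my_app_sourcetype]
TRANSFORMS-route_third_party = send_to_third_party

# transforms.conf -- set the syslog routing key for matching events
[send_to_third_party]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party
```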
Dear Splunkers,    I am having an issue with the process of squashing fields. When searching for events with no host or source I don't get any results:  index=<my_index> | where isnull(source) Does Splunk drop events after their fields are squashed? Logically, there should be events in my index that are missing the host and source fields. 
Hello, When I extract fields from structured XML files using props.conf, no key/value pairs are extracted, and the header info also comes in as an event. How would I eliminate the header info so it won't show up as an event, and is there anything I am missing that prevents the key/value pairs from being extracted? I used:

[sourcename]
BREAK_ONLY_BEFORE=<DSMODEL>
CHARSET=UTF-8
KV_MODE=xml
LINE_BREAKER=([\r\n]*)<DSMODEL>
MAX_TIMESTAMP_LOOKAHEAD=24
MUST_BREAK_AFTER=\/DSMODEL>
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
TIME_FORMAT=%Y%m%d%H%M%S
TIME_PREFIX=<TIMESTAMP>
TRUNCATE=2500
category=Custom
disabled=false
pulldown_type=true

Any help will be highly appreciated. Thank you so much.
Hello everyone, I have 1 search head and 3 indexers in an index cluster. It worked fine until yesterday; today I can't search events. I found this event in splunkd.log: 08-16-2022 13:23:10.727 +0800 ERROR HttpListener [175497 TcpChannelThread] - Exception while processing request from 10.20.5.10:38210 for /services/streams/search?sh_sid=rt_scheduler__admin_U0EtQWNjZXNzUHJvdGVjdGlvbg__RMD53730174ad49bc45c_at_1660627323_2982: Connection closed by peer  10.20.5.10 is the search head server. What should I do? Thanks
How do I access Splunk using a Python script? When I run this code I get an error:

import splunklib.client as client
service = client.connect(host='192.0.0.1', port=8000, username='username', password='password', verify=False)
print(service)

It shows this error: ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1129)
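WRONG_VERSION_NUMBER during the TLS handshake usually means the client spoke HTTPS to a plain-HTTP port. Port 8000 is Splunk Web; the SDK talks to splunkd's management port, which is 8089 by default. A sketch (ports assumed to be defaults; the connect call is left commented out since it needs a live server):

```python
def management_endpoint(host, port=8089):
    """splunkd's REST API listens on the management port over HTTPS."""
    return "https://{}:{}".format(host, port)

print(management_endpoint("192.0.0.1"))  # https://192.0.0.1:8089

# With the SDK itself (not run here):
# import splunklib.client as client
# service = client.connect(host="192.0.0.1", port=8089,
#                          username="username", password="password")
```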
Dear Splunk community: I am using the following chart command: <base search> | chart count by url_path, http_status_code |  addtotals col=true to get the following search result: URL (Y-Axis) & http status code (X-Axis)                         200     400     500     502     Total url1                    15        5          5         5         30 url2                    10       3          6          2         21                           25       8          11         7         51 Now I need to add the percentage of each count based on the total number and display the count and percentage together like so:                         200                    400                   500               502              Total url1               15 (50%)           5 (16%)         5 (16%)        5(16%)         30 url2                10 (47%)          3 (14%)         6 (28%)         2 (9%)          21                           25 (49%)        8 (15%)         11 (21%)      7 (13%)        51 Can someone show me how to achieve this? I greatly appreciate your help in advance!
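In SPL this kind of cell rewrite is usually done with addtotals followed by a foreach * [eval ...] pass (a sketch; exact syntax varies by version). The per-cell arithmetic itself is simple, as this Python sketch of the two rows shows; note that rounding to whole percent gives 17% where the example shows truncation to 16%:

```python
rows = {"url1": {"200": 15, "400": 5, "500": 5, "502": 5},
        "url2": {"200": 10, "400": 3, "500": 6, "502": 2}}

def with_percent(row):
    """Render each cell as 'count (pct%)' of the row total."""
    total = sum(row.values())
    return {code: "{} ({:.0f}%)".format(n, 100 * n / total)
            for code, n in row.items()}

print(with_percent(rows["url1"]))
# {'200': '15 (50%)', '400': '5 (17%)', '500': '5 (17%)', '502': '5 (17%)'}
```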
Hey all, Working on creating some access control based on indices and running into a weird issue. When I create a custom role and grant this role all capabilities (with no role inheritance) for the specified index, I'm not able to search data in that index. However, if I create said custom role inheriting the user role, with the exact same capabilities, then it lets me search.  I've also cloned the user role and appended the index permissions to suit my needs, but I experience the exact same issue: the cloned role has no access to the allowed indices, but the second I inherit the user role it seems to work again. This behaviour is only found on our dedicated search heads. When I enable the web UI and replicate on the indexers, it works as expected, with the custom role searching its assigned indices.  Splunk Enterprise Version: 9.0.0.1   Any help would be appreciated!!! Thanks guys
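It can help to diff the authorize.conf stanzas the UI generated on the search heads versus the indexers. As a reference point, a standalone role that can search one index without inheriting from user needs at least the search capability and the index in srchIndexesAllowed, and on dedicated search heads the role (with its index list) must exist on the search head itself, not only on the indexers. A hedged authorize.conf sketch with placeholder names:

```ini
# authorize.conf -- custom role with no inherited roles (names are placeholders)
[role_app_readonly]
importRoles =
srchIndexesAllowed = app_logs
srchIndexesDefault = app_logs
search = enabled
```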