All Topics
I have an index which searches across 10 hosts. I am comparing two strings and evaluating the results to see if there is an issue that needs alerting. The results are being skewed because it's evaluating the two strings from two different boxes - i.e. string1 on hostA against string2 on hostB. It should be string1 on host1 against string2 on host1, then move on to host2, host3 ... host10. How can I get my search to either go through each host sequentially, or group the results by host so it never pairs events from two different hosts? Current logic is:

    index="abc" "string1 OR string2" | transaction startswith="string1" endswith="string2" | where count>2
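A minimal sketch of one way to keep the pairing per host: pass host as a group-by field to transaction so events are only stitched together within the same host. Note that transaction produces an eventcount field rather than count, and the two strings likely need to be OR'd as separate terms rather than one quoted phrase:

    index="abc" ("string1" OR "string2")
    | transaction host startswith="string1" endswith="string2"
    | where eventcount > 2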
Hello, I have a field 'narrative' which contains long strings describing what happened to a piece of equipment. Within that string, in various locations, there is a substring that identifies the piece of equipment (Yes, it would be much better to have this as a defined field of its own; no, I don't know why the sysadmins set it up this way, I just inherited it). The equipment identifier is a 16-character string, and the 5th and 6th characters are always the state abbreviation (e.g. NJ for New Jersey, TX for Texas, etc.). It's not always the first substring within the field, so I can't just count to characters 5-6. Example: [may or may not be data here] 1234NJ56ABCD1234 [maybe some more data here] I want to extract that 16-character substring that has a valid state abbreviation into a new field called "equip_id". I've tried

    rex narrative= "(\d{5}|\w{5})?(?<equip_id>\w{1})"

but it is so far failing, and I think this would only get the 5th character anyway. Plus, I can't figure out where to put the list of acceptable abbreviations to match against. Any help appreciated.
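A minimal sketch, assuming the identifier is 16 alphanumeric characters with the state code in positions 5-6: anchor 4 word characters before the abbreviation and 10 after it, and put the list of acceptable abbreviations in the alternation (expanded here with only a few states for brevity; the full list of valid codes would need to be filled in):

    | rex field=narrative "\b(?<equip_id>\w{4}(?:NJ|TX|NY|CA)\w{10})\b"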
Good Day, I am trying to come up with ideas to translate a Sumo Transactional search with (States) conditions to a Splunk query. If anyone can provide some other options, please let me know. Here is my sample Sumo search:

    _sourceCategory=prod/app/m/* and "statement" and ("Search Keys" or "STATUS=ERROR" or Error)
    | parse "[ID=*]" as MID nodrop
    | transactionize MID (merge MID takeFirst, _raw join with "\n\n")
    | transaction on ORGID, EVENT, ORDER, FACILITY with "*A request to obtain a channel subscription failed*" as NO_SUB, with "*M cannot be discontinued*" as NO_DC, with "*Person not found*" as NO_PERSON
    | (NO_SUB + NO_DC + NO_PERSON) as Total
    | fields ORGID, EVENT, ORDER, FACILITY, Total, NO_SUB, NO_DC, NO_PERSON
    | sort by Total, ORGID, EVENT, ORDER
    //| sort by ORGID, EVENT

Splunk search so far:

    index=hhh_m_prod sourcetype=mirth* MID=* CID=* acctnumber=* facility=* orgid=* "Statement" ("Search Keys" OR "STATUS=ERROR" OR "Error")
    | fillnull value="NULL"
    | transaction MID
    | eval NO_DC=if(match(_raw, "M cannot be discontinued*"), "Yes", "No")
    | eval NO_SUB=if(match(_raw, "A request to obtain a channel subscription failed*"), "Yes", "No")
    | eval NO_PERSON=if(match(_raw, "Person not found*"), "Yes", "No")
    | transaction ORGID EVENT ORDER FACILITY
    | eval Total=sum(NO_SUB, NO_DC, NO_PERSON)
    | table ORGID, EVENT, ORDER, FACILITY, Total, NO_SUB, NO_DC, NO_PERSON
    | sort by Total ORGID EVENT ORDER
    | sort by ORGID, EVENT

** I am lost for ideas on running the conditional transaction statements... Should I use more eval statements, or set up a transactiontypes.conf?
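A sketch of one alternative that avoids transaction entirely: compute numeric flags per event with eval/match, then aggregate with stats, which is cheaper than chained transactions and sidesteps transactiontypes.conf (index, sourcetype, and field names carried over from the attempt above):

    index=hhh_m_prod sourcetype=mirth* "Statement" ("Search Keys" OR "STATUS=ERROR" OR "Error")
    | eval NO_SUB=if(match(_raw, "A request to obtain a channel subscription failed"), 1, 0)
    | eval NO_DC=if(match(_raw, "M cannot be discontinued"), 1, 0)
    | eval NO_PERSON=if(match(_raw, "Person not found"), 1, 0)
    | stats sum(NO_SUB) as NO_SUB sum(NO_DC) as NO_DC sum(NO_PERSON) as NO_PERSON by ORGID EVENT ORDER FACILITY
    | eval Total=NO_SUB+NO_DC+NO_PERSON
    | sort Total ORGID EVENT ORDER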
I want to provide read permission for only one app (not all apps) to a particular role. In my environment, under app permissions, I can see that everyone (all roles) has read access. I don't want to change the permissions on all apps; I want to configure one role, or one app's permissions, so that all users under that role have read permission to only that one app and can't see the other apps.
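A minimal sketch of restricting visibility at the app level, assuming you can edit the app's metadata on the search head; in etc/apps/<app_name>/metadata/local.meta, the empty stanza sets app-wide access (role_viewer is a hypothetical role name here):

    []
    access = read : [ admin, role_viewer ], write : [ admin ]

Keep in mind that role capabilities are additive in Splunk, so users must not inherit another role that still grants read on the other apps.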
What happens when we submit a search in the Splunk search bar? What mechanism is followed?
Hi All, we are getting the IP address in the logs, but we are unable to find the ISP/domain based on the IP address. Can you please help with this? Please suggest a way in Splunk to identify it.
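A minimal sketch using Splunk's built-ins: iplocation adds geolocation fields (city/country, but not the ISP), and the dnslookup external lookup reverse-resolves an IP to a hostname, which often reveals the ISP's domain (src_ip is an assumed field name here):

    ... | iplocation src_ip
    | lookup dnslookup clientip as src_ip OUTPUT clienthost as resolved_host

For true ISP/ASN attribution, an external enrichment lookup or app would be needed.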
Greetings!! I need your advice and opinions on the following points:
- What training can I take to master Splunk admin troubleshooting and complete the admin training package?
- Is there a way to set up a simulator or test environment that would let a team practice Splunk troubleshooting without touching the live environment? Any advice on how to set one up?
Kindly advise on these. Thank you in advance.
I want to remove everything from a "-" followed by any digit onward, e.g. -1, -2, -3 ... -9, -0. I'm using the rex function but not getting the desired output.

    current data                                       desired o/p
    splunk.server-1.9.0.CLIEN-server-38444             splunk.server
    abcd-available.server-7.0.0.RETCAR-server-75344    abcd-available.server
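A minimal sketch using rex in sed mode, assuming the values are in a hypothetical field called data: truncate at the first "-" that is immediately followed by a digit, which leaves "abcd-available" intact because its "-" is followed by a letter:

    | rex field=data mode=sed "s/-\d.*$//"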
I recently inherited a newly configured Splunk Enterprise 8 environment after the former admin left. I have basic user-level knowledge of Splunk, so I will describe my issue as best I can. When we try to search for a specific or wildcard event (e.g. print logs), we only receive results from the Linux servers, not the Windows servers. It was suggested that I check the .conf files for the Windows TA, but I'm not quite sure what I should be looking for within the files. The Splunk documentation site has been helpful; however, it doesn't explain why we aren't seeing events. Splunk is installed on RHEL 8 and we have installed forwarders on all the servers. I do not know where to go from here. Any assistance is appreciated. *Note: The former admin claimed that the server was fully configured in accordance with DIA's required auditable event list. The server is receiving data; however, it is not being disseminated properly.
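A quick sketch to confirm which hosts are actually sending anything before digging into the Windows TA's .conf files (tstats is fast because it reads only index metadata):

    | tstats count where index=* by host, sourcetype
    | sort - count

If the Windows hosts are absent entirely, the problem is on the forwarding/inputs side; if they appear but under unexpected indexes or sourcetypes, the problem is the search scope or the TA's inputs.conf.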
Hi, I have created an app in Azure, given it the permissions for the Office 365 Management Activity API, and also set up the Microsoft Office 365 Reporting Add-on in Splunk. The search results do not cover the fields I want: I want to get the subject of the email that Defender for O365 has triggered an alert on. Is the API sending that data? If yes, where are the fields stuck? Br, Robar
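One way to see what is actually arriving is to summarize the fields in the ingested events; a sketch, assuming the add-on writes events under the usual o365:management:activity sourcetype (adjust to whatever sourcetype your inputs use):

    sourcetype="o365:management:activity"
    | fieldsummary
    | table field count distinct_count

If the subject field never appears here, the API is most likely not sending it, rather than Splunk dropping it.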
So far, this is one of the only ways I've figured out how to change the onclick of the trellis single value view so that the entire block is clickable (like in ITSI). On the dashboard, I create a single value search, with trellis view, and colors inverted so the entire square shows the color status. Finally, I added script="mycode.js" to the form tag, and id="main" to its search tag in the XML:

    <form theme="dark" script="mycode.js">
    <search id="main">

I also added styling to force the labels down into the blocks:

    <panel>
      <html>
        <p/>
        <style>
          .facet-label {top:30% !important;}
          .facet-label {z-index:1 !important;}
          .facet-label {font-size:20px !important;}
          .facet-label {font-size:1.125vw !important;}
          .facet-label {display:inline !important;}
          text.single-result {z-index:1 !important;}
        </style>
      </html>
    </panel>

Then I placed the JS here: /opt/splunk/etc/apps/search/appserver/static/mycode.js

mycode.js:

    require(['splunkjs/mvc','splunkjs/mvc/simplexml/ready!','splunkjs/ready!'], function(mvc){
        var main = mvc.Components.get('main');
        main.on("search:progress", function() {
            $('.svg-container').on("click", function (e) {
                $($('a.single-drilldown', $(e.currentTarget))[0]).click();
            });
        });
        main.on("change", function() {
            $('.svg-container').on("click", function (e) {
                $($('a.single-drilldown', $(e.currentTarget))[0]).click();
            });
        });
        main.on("data", function() {
            $('.svg-container').on("click", function (e) {
                $($('a.single-drilldown', $(e.currentTarget))[0]).click();
            });
        });
    });

Final result: each block is clickable and leads to the intended drilldown. Now, the question is... why can't I just cram in the JS like below:

    $('.svg-container').on("click", function (e) {
        $($('a.single-drilldown', $(e.currentTarget))[0]).click();
    });

Why am I being forced to use mvc at all?
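A hedged guess, with a sketch: the direct binding runs before the trellis SVG elements exist, so there is nothing to attach to, and the mvc events are only being used as a signal to re-bind after each render. Delegated binding on a stable ancestor should survive re-renders without any mvc plumbing, assuming jQuery is loaded on the page:

    $(document).on("click", ".svg-container", function (e) {
        $($("a.single-drilldown", e.currentTarget)[0]).click();
    });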
Hi, Splunk tells me that "the arguments to the case function are invalid". What is wrong, please?

    | eval site=case(site=="SAINT MAXIMIN LA SAINTE BA", "SAINT MAXIMIN LA SAINTE BAUME", 1)
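case() takes condition/value pairs, so the trailing 1 is a condition with no matching value. A sketch of the usual fix: use true() as a catch-all condition, here keeping the original value as the default:

    | eval site=case(site=="SAINT MAXIMIN LA SAINTE BA", "SAINT MAXIMIN LA SAINTE BAUME", true(), site)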
I have a speedtest from Ookla that runs every 30 minutes and returns results from 3 servers:

    2022-02-02T08:00:26.000-0300,94.02204,94.28108,NETSEG FIBRA
    2022-02-02T08:00:51.000-0300,304.676784,153.272304,Oi
    2022-02-02T08:01:17.000-0300,303.109696,151.48468,LinQ Telecom
    2022-02-02T08:30:25.000-0300,94.107144,93.58704,NETSEG FIBRA
    2022-02-02T08:30:49.000-0300,304.835216,153.044024,Oi
    2022-02-02T08:31:16.000-0300,275.610992,153.0804,LinQ Telecom

Here is my search:

    sourcetype="SpeedTest"
    | convert num(download.bandwidth) as D_bnd
    | convert num(upload.bandwidth) as U_bnd
    | eval dmbs=D_bnd*8/1000000
    | eval umbs=U_bnd*8/1000000
    | table _time dmbs umbs

This is the basic result. I don't want to do an avg(dmbs), so timechart won't work as far as I'm aware. What I would like is something like span=30m to join these while showing a label for the server.name on each bar. Is this possible, or do I have to make three chart searches and then combine them somehow? The expected result I am trying to make is like a timechart avg(dmbs) span=15, but with each server.name as a series, so I can overlay them or use the trellis layout and aggregate on server.name while still showing the up/down speed. I don't care if the up/down is side by side or stacked. The span will eliminate the gap between the times (30 min). I did this for one server.name and it works fine, but I want to combine all three server.name values in one chart at different x data points.
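A minimal sketch splitting the series by server, assuming the label lives in a field called server.name (dotted field names need single quotes in eval, and are easier to rename first):

    sourcetype="SpeedTest"
    | eval dmbs='download.bandwidth'*8/1000000, umbs='upload.bandwidth'*8/1000000
    | rename "server.name" as server
    | timechart span=30m avg(dmbs) by server

Since timechart only supports one split-by field, a stats-based variant keeps both directions per server for trellis: | bin _time span=30m | stats avg(dmbs) as down avg(umbs) as up by _time server.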
Hi dear community! I'm trying to build a dashboard using records in two states, STATE1 and STATE2. I'm logging state changes, so in the logs I have these lines:

    RECORD_<record_id>_CHANGED_STATE_TO STATE1
    RECORD_<record_id>_CHANGED_STATE_TO STATE2

To get all the records in STATE2 I use this, and it works well:

    index=... source=... "CHANGED_STATUS_TO STATE2"
    | eval action="STATE2"
    | timechart count by action span=200m

To get the records in STATE1 I need to grab all the "CHANGED_STATUS_TO STATE1" records but filter out all the records that are already in STATE2. I was trying to extract record_id and use a subsearch like this, but it seems I'm doing something wrong:

    index=... source=... NOT record_id+"CHANGED_STATUS_TO STATE2" [search index=... source=... | rex "(?<record_id>\d+)_CHANGED_STATUS_TO STATE1" | fields record_id]
    | eval action="STATE1"
    | timechart count by action span=200m

Could you please help me?
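A sketch of an alternative that avoids the subsearch: pull both state changes in one search and keep only the records whose most recent event is STATE1, assuming a record's current state is its latest state-change line (using the STATUS spelling from the working search above):

    index=... source=... "CHANGED_STATUS_TO"
    | rex "RECORD_(?<record_id>\d+)_CHANGED_STATUS_TO (?<state>STATE\d)"
    | stats latest(state) as state latest(_time) as _time by record_id
    | where state="STATE1"
    | timechart count span=200m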
Hi! We deployed 2 new servers on Linux and created an SH cluster from them.

Captain:

    dynamic_captain : 1
    elected_captain : Wed Feb 2 12:03:47 2022
    id : 4F8CBE82-6302-4582-9FB7-48AC452AA43D
    initialized_flag : 1
    label : sphome5
    mgmt_uri : https://%%%%%%%%%
    min_peers_joined_flag : 1
    rolling_restart_flag : 0
    service_ready_flag : 1

Members:

    sphome5
        label : sphome5
        mgmt_uri : https://%%%%%%%%%
        mgmt_uri_alias : https://%%%%%%%%%
        status : Up
    sphome6
        label : sphome6
        last_conf_replication : Wed Feb 2 12:07:49 2022
        mgmt_uri : https://%%%%%%%%%
        mgmt_uri_alias : https://%%%%%%%%%
        status : Up

After creating the SH cluster, it is assigned a long ID (4F8CBE82-6302-4582-9FB7-48AC452AA43D). Now we need to add the older indexers via Search peers => New search peer, etc. The older indexers are deployed on Windows Server 2016 and 2019. When I try to add a new search peer, I see that the replication status is Failed. In splunkd.log on the indexer I see the following error:

    BundleDataProcessor - File length is greater than 260, File creation may fail for C:\Program Files\Splunk\var\run\searchpeers\4F8CBE82-6302-4582-9FB7-48AC452AA43D-1643781827.82278d5fd22b73a1.tmp\apps\TA-alfa-telegram--connector\bin\ta_alfa_telegram_connector\aob_py2\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_manager.py

How can we change the SH cluster ID (4F8CBE82-6302-4582-9FB7-48AC452AA43D) to something shorter? And additionally, if that is impossible, could you please give any recommendation on how we can solve this problem?
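A hedged workaround sketch rather than changing the cluster ID: the error is the classic Windows 260-character MAX_PATH limit, which Windows Server 2016/2019 can lift via the LongPathsEnabled registry value (run from an elevated prompt; whether splunkd itself then honors long paths is version-dependent, so this is only a possible mitigation - shortening long app/add-on directory names is the other lever):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f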
I have one primary index which contains 30 days of logs, but I want one year of data. For this purpose I created one more Splunk summary index where I can copy one month of data (Jan), then after a month, another month of data (Feb), and so on. So how can I copy one month of data from one index to another?
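A minimal sketch using collect to write the results of a search into another index (assuming the target index, here called summary_archive, already exists; note that collect stores events under the stash sourcetype unless told otherwise):

    index=primary earliest=-30d@d latest=@d
    | collect index=summary_archive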
Hello @Anonymous, please help me out here. I was trying to extract a field "faultDescription", but the logs have a different format for each event.

    event 1: "faultDescription" : "Backend system error has occurred.",
    event 2: <soafault:faultDescription>SKU is not provided</soafault:faultDescription>
    event 3: <soafault:faultDescription>SKU is not provided</soafault:faultDescription>

I have tried the rex command below, and it works for events 1 and 3. But how do I write a command which will extract the fault description in all 3 formats? There is a space between faultDescription" and :, hence I am not able to write one expression which covers all 3 events. I have tried:

    | rex field=_raw "faultDescription+.\s:\s(?P<description>.+),"
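A sketch of a single pattern covering both shapes: skip whatever non-word characters follow the field name (the quote/colon/space run in the JSON style, or the closing > of the XML tag) and capture up to the next quote or angle bracket:

    | rex field=_raw "faultDescription\W+(?<description>[^\"<]+)"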
Hello, I would like to summary index some data from heavy searches. The savedsearch is:

    [Summary - servizi BIND by Clients]
    action.email.useNSSubject = 1
    action.summary_index = 1
    action.summary_index._name = summary_ldap
    action.summary_index.instance = servizi
    action.summary_index.type = client
    alert.track = 0
    cron_schedule = 0 2 * * *
    description = BIND by clients on servizi
    dispatch.earliest_time = -1d@d
    dispatch.latest_time = @d
    display.general.timeRangePicker.show = 0
    enableSched = 1
    schedule_window = 60
    search = index=ldap sourcetype=ldap:syslog instance=servizi\
    [search index=ldap sourcetype=ldap:syslog instance=servizi op_type=BIND\
    | where dn!="cn=replication manager,cn=config"\
    | fields conn host instance]\
    | where in(op_type,"connection","closed","BIND")\
    | transaction maxopenevents=-1 maxopentxn=-1 maxevents=-1 mvlist=t startswith=client_ip=* conn host instance\
    | lookup dnslookup clientip as client_ip OUTPUT clienthost as client\
    | eval client_ip=mvindex(client_ip,mvfind(client_ip, "^\d+")), client=if(isnull(client),client_ip,client), dn=lower(dn)\
    | mvexpand dn\
    | where dn!="null"\
    | sichart count(dn) over dn by client

The subsearch returns millions of results, so I have already increased maxout (and maxresultrows in [searchresults]). The problem is that this search never ends. In the Job Manager the search is always in parsing mode:

    This search is still running and is approximately 0% complete. (SID: scheduler__nobody__DS389__RMD58ebfeb8123af6f21_at_1643763600_45904)

The search log terminated with:

    02-02-2022 02:02:24.087 INFO SearchOperator:kv [182237 searchOrchestrator] - Extractor stats: name=dnexplicitanom, probes=225, total_time_ms=5, max_time_ms=1
    02-02-2022 02:02:24.091 INFO ISearchOperator [182237 searchOrchestrator] - 0x7f3315be3400 PREAD_HISTOGRAM: usec_1_8=6754 usec_8_64=7150 usec_64_512=76 usec_512_4096=25 usec_4096_32768=28 usec_32768_262144=1 usec_262144_INF=0
    02-02-2022 02:02:24.092 INFO SearchStatusEnforcer [182237 searchOrchestrator] - SearchStatusEnforcer is already terminated

It's all INFO rows; no WARN, no ERROR.
Looking at the log I see that the subsearch terminated well:

    audit.log:02-02-2022 02:02:50.993 +0100 INFO AuditLogger - Audit:[timestamp=02-02-2022 02:02:50.993, user=splunk-system-user, action=search, info=completed, search_id='subsearch_scheduler__nobody__DS389__RMD58ebfeb8123af6f21_at_1643763600_45904_1643763603.1', has_error_warn=false, fully_completed_search=true, total_run_time=140.00, event_count=3337731, result_count=3337731, available_count=3337731, scan_count=3340592, drop_count=0, exec_time=1643763603, api_et=1643670000.000000000, api_lt=1643756400.000000000, api_index_et=N/A, api_index_lt=N/A, search_et=1643670000.000000000, search_lt=1643756400.000000000, is_realtime=0, savedsearch_name="", search_startup_time="740", is_prjob=false, acceleration_id="A3473993-F504-40E9-8902-DDB9B00D2B1B_DS389_nobody_d9f7ec6dda7fbf4b", app="DS389", provenance="N/A", mode="historical", is_proxied=false, searched_buckets=13, eliminated_buckets=0, considered_events=3341037, total_slices=139625, decompressed_slices=125689, duration.command.search.index=4522, invocations.command.search.index.bucketcache.hit=0, duration.command.search.index.bucketcache.hit=0, invocations.command.search.index.bucketcache.miss=0, duration.command.search.index.bucketcache.miss=0, invocations.command.search.index.bucketcache.error=0, duration.command.search.rawdata=40772, invocations.command.search.rawdata.bucketcache.hit=0, duration.command.search.rawdata.bucketcache.hit=0, invocations.command.search.rawdata.bucketcache.miss=0, duration.command.search.rawdata.bucketcache.miss=0, invocations.command.search.rawdata.bucketcache.error=0, sourcetype_count__ldap:syslog=3337731, roles='admin+power+splunk-system-role+user', search='search index=ldap sourcetype=ldap:syslog instance=servizi op_type=BIND | where dn!="cn=replication manager,cn=config" | fields conn host instance']

Eight hours have passed, and no other logs have been written. The Job Manager still shows the process as "parsing". I see no errors, but on the system I see no running process for this scheduled search, so I suspect that it's not actually in progress. I don't know how to debug this issue. Do you have any hints? The Splunk Enterprise version is 8.2.4. Thank you very much. Kind Regards, Marco
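One place to look, as a sketch: the scheduler's own view of the job in _internal, which shows whether the main search ever left dispatch (savedsearch name taken from the stanza above):

    index=_internal sourcetype=scheduler savedsearch_name="Summary - servizi BIND by Clients"
    | table _time status run_time result_count

Note also, hedged as a hypothesis only, that a subsearch feeding 3.3M results into transaction with maxevents=-1 can stall on memory long before emitting a WARN, which would match the "forever parsing" symptom.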
Hi all, I am planning on integrating O365 and Azure cloud services with my Splunk on-prem environment. There are several add-ons to choose from on Splunkbase:
- Microsoft Azure Add-on for Splunk
- Splunk Add-on for Microsoft Office 365
- Splunk Add-on for Microsoft Cloud Services
What is the main difference between these add-ons, and which should I use? The documentation did not really help: "The Splunk Add-on for Microsoft Office 365 replaces the modular input for the Office 365 Management API within the Splunk Add-on for Microsoft Cloud Services." Is it still possible to collect the O365 logs with the Cloud Services add-on, which collects via so-called event hubs? Thank you, O.
Hi, I have a log like this:

    2022-02-01 11:59:59,869 INFO CUS.AbCD-Host-000000 [AppListener] Receive Packet[0000000*]: Cluster[String1.String2]

How can I extract String1 and String2 separately with a single rex like this?

    Cluster\[(?<GroupREX>\w+\.\w+)

Any ideas? Thanks,
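A minimal sketch using two capture groups in one rex, splitting on the literal dot (Group1/Group2 are hypothetical field names):

    | rex "Cluster\[(?<Group1>\w+)\.(?<Group2>\w+)\]"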