All Topics



Hi, I have created an app in Azure, granted it permissions to the Office 365 Management Activity API, and also set up the Microsoft Office 365 Reporting Add-on in Splunk. The search results do not cover the fields I want: I want to get the subject of the email that Defender for O365 triggered an alert on. Is the API sending the data? If yes, where are the fields getting stuck? Br, Robar
So far, this is one of the only ways I've figured out how to change the onclick of the trellis single value view so that the entire block is clickable (like in ITSI). On the dashboard, I create a single value search with trellis view and colors inverted so the entire square shows the color status, AND finally I add script="mycode.js" to the form tag and id="main" to its search tag in the XML:

<form theme="dark" script="mycode.js">
<search id="main">

I also added styling to force the labels down into the blocks:

<panel>
  <html>
    <p/>
    <style>
      .facet-label {top:30% !important;}
      .facet-label {z-index:1 !important;}
      .facet-label {font-size:20px !important;}
      .facet-label {font-size:1.125vw !important;}
      .facet-label {display:inline !important;}
      text.single-result {z-index:1 !important;}
    </style>
  </html>
</panel>

Then I placed the JS here: /opt/splunk/etc/apps/search/appserver/static/mycode.js

mycode.js:

require(['splunkjs/mvc','splunkjs/mvc/simplexml/ready!','splunkjs/ready!'], function(mvc){
    var main = mvc.Components.get('main');
    // Re-bind the click handler on each search lifecycle event,
    // since the trellis SVG containers are re-rendered each time.
    main.on("search:progress", function() {
        $('.svg-container').on("click", function (e) {
            $($('a.single-drilldown', $(e.currentTarget))[0]).click();
        });
    });
    main.on("change", function() {
        $('.svg-container').on("click", function (e) {
            $($('a.single-drilldown', $(e.currentTarget))[0]).click();
        });
    });
    main.on("data", function() {
        $('.svg-container').on("click", function (e) {
            $($('a.single-drilldown', $(e.currentTarget))[0]).click();
        });
    });
});

Final result: each block is clickable and leads to the intended drilldown. Now, the question is... why can't I just cram in the JS like below?

$('.svg-container').on("click", function (e) {
    $($('a.single-drilldown', $(e.currentTarget))[0]).click();
});

Why am I being forced to use mvc at all?
Hi, Splunk tells me that "the arguments to the case function are invalid". What is wrong, please?

| eval site=case(site=="SAINT MAXIMIN LA SAINTE BA", "SAINT MAXIMIN LA SAINTE BAUME", 1)
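For context, case() expects condition/value pairs, so the trailing 1 here is parsed as a condition with no matching value, which is likely what triggers that error. A minimal sketch of a catch-all default, assuming the intent is to keep the original site name otherwise:

| eval site=case(site=="SAINT MAXIMIN LA SAINTE BA", "SAINT MAXIMIN LA SAINTE BAUME", true(), site)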
I have a speedtest from Ookla that runs every 30 min and returns results from 3 servers.

2022-02-02T08:00:26.000-0300,94.02204,94.28108,NETSEG FIBRA
2022-02-02T08:00:51.000-0300,304.676784,153.272304,Oi
2022-02-02T08:01:17.000-0300,303.109696,151.48468,LinQ Telecom
2022-02-02T08:30:25.000-0300,94.107144,93.58704,NETSEG FIBRA
2022-02-02T08:30:49.000-0300,304.835216,153.044024,Oi
2022-02-02T08:31:16.000-0300,275.610992,153.0804,LinQ Telecom

Here is my search:

sourcetype="SpeedTest"
| convert num(download.bandwidth) as D_bnd
| convert num(upload.bandwidth) as U_bnd
| eval dmbs=D_bnd*8/1000000
| eval umbs=U_bnd*8/1000000
| table _time dmbs umbs

This is the basic result. I don't want to do just an avg(dmbs), so timechart won't work, as far as I am aware. What I would like is something like a span=30m to join these while showing a label for the server.name for each bar. Is this possible, or do I have to make three chart searches and then combine them somehow? The expected result I am trying to make is like a timechart avg(dmbs) span=15 but with each server.name as a series, so I can overlay them or use the trellis layout and aggregate on server.name while still showing the up/down speed. I don't care if the up/down is side by side or stacked. The span will eliminate the gap between the times (30 min). I did this on one server.name and it works fine, but I want to combine all three server.name values in one chart at different x data points.
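For what it's worth, timechart does take a split-by clause, so a single search may cover this; a sketch, assuming server.name holds the server label and the bandwidth fields extract as shown above:

sourcetype="SpeedTest"
| eval dmbs='download.bandwidth'*8/1000000, umbs='upload.bandwidth'*8/1000000
| timechart span=30m avg(dmbs) as download avg(umbs) as upload by server.name

Each server.name value then becomes its own series, which can be overlaid or split into a trellis layout.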
Hi dear community! I'm trying to build a dashboard using records in two states, STATE1 and STATE2. I'm logging state changes, so in the logs I have these lines:

RECORD_<record_id>_CHANGED_STATE_TO STATE1
RECORD_<record_id>_CHANGED_STATE_TO STATE2

To get all the records in STATE2 I use this and it works well:

index=... source=... "CHANGED_STATUS_TO STATE2"
| eval action="STATE2"
| timechart count by action span=200m

To get the records in STATE1 I need to grab all the records matching "CHANGED_STATUS_TO STATE1" but filter out all the records that are already in STATE2. I was trying to extract record_id and use a subsearch like this, but it seems I'm doing something wrong:

index=... source=... NOT record_id+"CHANGED_STATUS_TO STATE2" [search index=... source=... | rex "(?<record_id>\d+)_CHANGED_STATUS_TO STATE1" | fields record_id]
| eval action="STATE1"
| timechart count by action span=200m

Could you please help me?
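A sketch of one way this kind of exclusion is usually written, assuming record_id can be extracted from both event types (note the sample log lines say CHANGED_STATE_TO while the searches say CHANGED_STATUS_TO, so adjust the literal accordingly): extract record_id on both sides and let the subsearch's field list become a NOT filter:

index=... source=... "CHANGED_STATE_TO STATE1"
| rex "RECORD_(?<record_id>\d+)_CHANGED_STATE_TO"
| search NOT [search index=... source=... "CHANGED_STATE_TO STATE2"
    | rex "RECORD_(?<record_id>\d+)_CHANGED_STATE_TO"
    | fields record_id]
| eval action="STATE1"
| timechart count by action span=200m

The subsearch expands to (record_id=... OR record_id=...), so NOT [...] drops every record already seen in STATE2.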
Hi! We deployed 2 new servers on Linux and created an SH cluster from them.

Captain:
 dynamic_captain : 1
 elected_captain : Wed Feb 2 12:03:47 2022
 id : 4F8CBE82-6302-4582-9FB7-48AC452AA43D
 initialized_flag : 1
 label : sphome5
 mgmt_uri : https://%%%%%%%%%
 min_peers_joined_flag : 1
 rolling_restart_flag : 0
 service_ready_flag : 1

Members:
 sphome5
  label : sphome5
  mgmt_uri : https://%%%%%%%%%
  mgmt_uri_alias : https://%%%%%%%%%
  status : Up
 sphome6
  label : sphome6
  last_conf_replication : Wed Feb 2 12:07:49 2022
  mgmt_uri : https://%%%%%%%%%
  mgmt_uri_alias : https://%%%%%%%%%
  status : Up

After creating the SH cluster, it is assigned a long ID (4F8CBE82-6302-4582-9FB7-48AC452AA43D). Now we need to add the older indexers via Search peers => New search peer, etc. The older indexers are deployed on Windows Server 2016 and 2019. When I add a new search peer, I see that its replication status is Failed. In splunkd.log on the indexer I see the following error:

BundleDataProcessor - File length is greater than 260, File creation may fail for C:\Program Files\Splunk\var\run\searchpeers\4F8CBE82-6302-4582-9FB7-48AC452AA43D-1643781827.82278d5fd22b73a1.tmp\apps\TA-alfa-telegram--connector\bin\ta_alfa_telegram_connector\aob_py2\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_manager.py

How can we change the SH cluster ID (4F8CBE82-6302-4582-9FB7-48AC452AA43D) to something shorter? And if that is impossible, could you please give any recommendation on how we can solve this problem?
I have one primary index which retains 30 days of logs, but I want one year. For this purpose I created one more Splunk summary index where I can copy one month of data (Jan), then after a month the next month's data (Feb), and so on. How can I copy one month of data from one index to another?
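A minimal sketch of the usual approach, assuming a target index (hypothetically named summary_yearly here) already exists: run a search over the month to keep and pipe it to collect, which writes the events into the other index:

index=primary_index earliest=-1mon@mon latest=@mon
| collect index=summary_yearly

Scheduling this monthly keeps the copy rolling; the index names above are placeholders, and aggregated summaries (sistats/sichart) would be an alternative to copying raw events.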
Hello @Anonymous, please help me out here. I was trying to extract a field "faultDescription", but the logs have a different format for each event.

event 1: "faultDescription" : "Backend system error has occurred.",
event 2: <soafault:faultDescription>SKU is not provided</soafault:faultDescription>
event 3: <soafault:faultDescription>SKU is not provided</soafault:faultDescription>

I have tried the rex command below, and it works for events 1 and 3. But how do I write a command that will extract the fault description in all 3 formats? There is a space between faultDescription" and the colon, hence I am not able to write one expression that covers all 3 events.

I have tried:

| rex field=_raw "faultDescription+.\s:\s(?P<description>.+),"
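A sketch of a single rex that may cover both shapes, assuming the JSON-style value always sits in double quotes and the XML-style value always follows the closing > of the opening tag (the alternation matches either the " : " separator or the >):

| rex "faultDescription(?:\"\s*:\s*\"|>)(?<description>[^\"<]+)"

On event 1 the first branch matches and the capture stops at the closing quote; on events 2 and 3 the > branch matches and the capture stops at the closing tag.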
Hello, I would like to summary index some data from heavy searches. The saved search is:

[Summary - servizi BIND by Clients]
action.email.useNSSubject = 1
action.summary_index = 1
action.summary_index._name = summary_ldap
action.summary_index.instance = servizi
action.summary_index.type = client
alert.track = 0
cron_schedule = 0 2 * * *
description = BIND by clients on servizi
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d
display.general.timeRangePicker.show = 0
enableSched = 1
schedule_window = 60
search = index=ldap sourcetype=ldap:syslog instance=servizi\
[search index=ldap sourcetype=ldap:syslog instance=servizi op_type=BIND\
| where dn!="cn=replication manager,cn=config"\
| fields conn host instance]\
| where in(op_type,"connection","closed","BIND")\
| transaction maxopenevents=-1 maxopentxn=-1 maxevents=-1 mvlist=t startswith=client_ip=* conn host instance\
| lookup dnslookup clientip as client_ip OUTPUT clienthost as client\
| eval client_ip=mvindex(client_ip,mvfind(client_ip, "^\d+")), client=if(isnull(client),client_ip,client), dn=lower(dn)\
| mvexpand dn\
| where dn!="null"\
| sichart count(dn) over dn by client

The subsearch returns millions of results, so I have already increased maxout (and maxresultrows in [searchresults]). The problem is that this search never ends. In the Job Manager the search is always in parsing mode:

This search is still running and is approximately 0% complete. (SID: scheduler__nobody__DS389__RMD58ebfeb8123af6f21_at_1643763600_45904)

The search log terminated with:

02-02-2022 02:02:24.087 INFO SearchOperator:kv [182237 searchOrchestrator] - Extractor stats: name=dnexplicitanom, probes=225, total_time_ms=5, max_time_ms=1
02-02-2022 02:02:24.091 INFO ISearchOperator [182237 searchOrchestrator] - 0x7f3315be3400 PREAD_HISTOGRAM: usec_1_8=6754 usec_8_64=7150 usec_64_512=76 usec_512_4096=25 usec_4096_32768=28 usec_32768_262144=1 usec_262144_INF=0
02-02-2022 02:02:24.092 INFO SearchStatusEnforcer [182237 searchOrchestrator] - SearchStatusEnforcer is already terminated

It's all INFO rows, no WARN, no ERROR.
Looking at the log I see that the subsearch terminated well:

audit.log:02-02-2022 02:02:50.993 +0100 INFO AuditLogger - Audit:[timestamp=02-02-2022 02:02:50.993, user=splunk-system-user, action=search, info=completed, search_id='subsearch_scheduler__nobody__DS389__RMD58ebfeb8123af6f21_at_1643763600_45904_1643763603.1', has_error_warn=false, fully_completed_search=true, total_run_time=140.00, event_count=3337731, result_count=3337731, available_count=3337731, scan_count=3340592, drop_count=0, exec_time=1643763603, api_et=1643670000.000000000, api_lt=1643756400.000000000, api_index_et=N/A, api_index_lt=N/A, search_et=1643670000.000000000, search_lt=1643756400.000000000, is_realtime=0, savedsearch_name="", search_startup_time="740", is_prjob=false, acceleration_id="A3473993-F504-40E9-8902-DDB9B00D2B1B_DS389_nobody_d9f7ec6dda7fbf4b", app="DS389", provenance="N/A", mode="historical", is_proxied=false, searched_buckets=13, eliminated_buckets=0, considered_events=3341037, total_slices=139625, decompressed_slices=125689, duration.command.search.index=4522, invocations.command.search.index.bucketcache.hit=0, duration.command.search.index.bucketcache.hit=0, invocations.command.search.index.bucketcache.miss=0, duration.command.search.index.bucketcache.miss=0, invocations.command.search.index.bucketcache.error=0, duration.command.search.rawdata=40772, invocations.command.search.rawdata.bucketcache.hit=0, duration.command.search.rawdata.bucketcache.hit=0, invocations.command.search.rawdata.bucketcache.miss=0, duration.command.search.rawdata.bucketcache.miss=0, invocations.command.search.rawdata.bucketcache.error=0, sourcetype_count__ldap:syslog=3337731, roles='admin+power+splunk-system-role+user', search='search index=ldap sourcetype=ldap:syslog instance=servizi op_type=BIND | where dn!="cn=replication manager,cn=config" | fields conn host instance']

Eight hours have passed, and no other logs have been written. The Job Manager still shows the process as "parsing". I see no errors, but on the system I see no running process for this scheduled search, so I suspect it is not actually in progress. I don't know how to debug this issue. Do you have any hints?

The Splunk Enterprise version is 8.2.4. Thank you very much. Kind Regards, Marco
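For whoever picks this up, a hedged debugging sketch: the REST search/jobs endpoint reports the dispatch state of the job directly, which can confirm whether it is genuinely stuck in parsing (the sid filter below is just the scheduler prefix from the message above):

| rest /services/search/jobs splunk_server=local
| search sid="scheduler__nobody__DS389*"
| table sid dispatchState doneProgress runDuration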
Hi all, I am planning on integrating O365 and Azure cloud services with my Splunk on-prem environment. There are several add-ons to choose from on Splunkbase:

Microsoft Azure Add-on for Splunk
Splunk Add-on for Microsoft Office 365
Splunk Add-on for Microsoft Cloud Services

What is the main difference between these add-ons, and which should I use? The documentation did not really help: "The Splunk Add-on for Microsoft Office 365 replaces the modular input for the Office 365 Management API within the Splunk Add-on for Microsoft Cloud Services." Is it still possible to collect the O365 logs with the Cloud Services add-on, which collects via so-called event hubs?

Thank you, O.
Hi, I have a log like this:

2022-02-01 11:59:59,869 INFO CUS.AbCD-Host-000000 [AppListener] Receive Packet[0000000*]: Cluster[String1.String2]

How can I extract String1 and String2 separately with a single rex like this?

Cluster\[(?<GroupREX>\w+\.\w+)

Any ideas? Thanks,
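A sketch with two named capture groups, assuming both parts are word characters separated by a literal dot (the field names GroupA and GroupB are placeholders):

| rex "Cluster\[(?<GroupA>\w+)\.(?<GroupB>\w+)\]"

This extracts String1 into GroupA and String2 into GroupB in one pass.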
Hi, I have two results like this; how can I create a Sankey diagram from them?

SOURCE                      count
Server1.Mainserver          629
Server2.Mainserver          2539
Server3.Mainserver          29668
Server_Name4.Mainserver     6470
Server5.Mainserver          114547
Server6.Mainserver          2
Server7.Mainserver          18
Server8.Mainserver          11
Server9.Mainserver          27
Server10.Mainserver         20375
Server11.Mainserver         698
Server12.Mainserver         61
Server13.Mainserver         10014
Server14.Mainserver         160672
Server15.Mainserver         16
Server16.Mainserver         6643
Server17.Mainserver         4780

TARGET                      count
Mainserver.Server1          624
Mainserver.Server2          2611
Mainserver.Server3          29962
Mainserver.Server_Name4     6503
Mainserver.Server5          115897
Mainserver.Server7          25
Mainserver.Server8          15
Mainserver.Server9          22
Mainserver.Server10         20586
Mainserver.Server11         640
Mainserver.Server12         61
Mainserver.Server13         9899
Mainserver.Server14         158477
Mainserver.Server15         7
Mainserver.Server16         6615
Mainserver.Server17         4777

I want something like a diagram with Mainserver shown in the center. Any ideas? Thanks,
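The Sankey diagram visualization expects rows of source, target, and a count; a sketch of reshaping these two results, assuming they come from two searches that can be appended and that each dotted name splits into the two node columns (the angle-bracket placeholders stand for the existing searches):

<first search producing SOURCE and count>
| rex field=SOURCE "(?<source>[^.]+)\.(?<target>.+)"
| append
    [search <second search producing TARGET and count>
    | rex field=TARGET "(?<source>[^.]+)\.(?<target>.+)"]
| stats sum(count) as count by source target

With Mainserver as the target of the first set and the source of the second, it lands in the middle of the diagram.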
Hi, I use the search below in order to display the number of events corresponding to my main search on a cluster map. There is a gap between the results displayed on my map and the results of the main search. I have identified a first problem: some site names differ slightly between the lookup and Splunk. For example, I have a site called "LA BA" in Splunk and "LA BAUME" in the CSV. What do I have to do so that the sites match well?

index=toto sourcetype=tutu
| stats dc(id) as nbincid by site
| where isnotnull(site)
| join type=left site
    [| inputlookup Bp.csv
     | rename siteName as site
     | fields site latitude longitude]
| table site nbincid latitude longitude
| geostats latfield=latitude longfield=longitude globallimit=0 values(nbincid)
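On the matching itself, a sketch under the assumption that most differences are only case or whitespace: normalize the event side before matching, and prefer lookup over join (join's subsearch limits can themselves cause missing rows):

index=toto sourcetype=tutu
| eval site=upper(trim(site))
| stats dc(id) as nbincid by site
| lookup Bp.csv siteName as site OUTPUT latitude longitude
| geostats latfield=latitude longfield=longitude globallimit=0 values(nbincid)

Where the names genuinely differ ("LA BA" vs "LA BAUME"), no normalization will help; that usually needs either correcting the CSV or a small translation lookup mapping the Splunk names onto the CSV names.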
In the timechart command I used cont=false; the statistics table then shows no rows for empty values, but in the bar graph the empty/missing days show up as a gap.

| timechart cont=false span=d sum(Negotiate) as "Negotiate", sum(Submitted) as "Submitted", sum(Draft) as "Draft", sum(Agreed) as "Agreed", sum(PartlyAgreed) as "Partly Agreed", sum(Empty) as "Empty", sum(Rejected) as "Rejected", sum(Obsolete) as "Obsolete", sum(NA) as "N/A", values(Total) as Total
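One hedged workaround: leave cont at its default (true) so timechart emits every span, then fill the resulting nulls with zero so both the statistics table and the bars show a value instead of a gap (shortened to two series for brevity):

| timechart span=1d sum(Negotiate) as "Negotiate", sum(Submitted) as "Submitted"
| fillnull value=0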
Hello, I am new to Splunk and working on getting SC4S set up correctly. My question is: where do I set up the SC4S server? I am the Splunk admin and need to help another team onboard their syslog data.
Hi Team, please let me know how the RUM license is calculated; I need it for an evaluation. Thanks, Kamal
Hello, I just recently restarted my Splunk Enterprise instance in order to add an app, and once it was back up I noticed that one of the health checks was failing. Also, no new logs were showing up in search. I looked at the monitoring console and noticed the parsing queue was full. I also checked metrics.log and saw some of the queues were full. If I'm understanding the data pipeline hierarchy correctly, it's the parsing queue that's actually blocked and causing the other queues to be blocked. I also checked splunkd.log and didn't really see anything that seemed related. There were some SSL errors which didn't seem related, and this other error:

ERROR HttpInputDataHandler - Failed processing http input, token name=kube, channel=n/a, source_IP=172.17.8.66, reply=9, events_processed=4, http_input_body_size=7256, parsing_err="Server is busy"

but that seems to be a result of the full queue. I looked at my resource usage in the monitoring console and with top, and neither CPU nor memory goes higher than 50% utilization. I also restarted Splunk multiple times, but the queue always goes to 100% instantly. I did notice a warning on startup:

Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.

However, I didn't make any changes to props.conf, and everything was working before I restarted the first time, so I assume this is not related. Not sure what else to try. Any help would be greatly appreciated!
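For whoever troubleshoots this, a hedged diagnostic sketch: the group=queue metrics in _internal show each queue's fill ratio over time, which usually identifies the first pipeline stage to back up:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=1m max(fill_pct) by name

If the parsing queue saturates first while CPU and memory stay low, regex-heavy props/transforms or a blockage further downstream are the usual suspects.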
Hi, I'd like users not to be able to create any new dashboards, either from the search bar or via the "Create New Dashboard" button on the dashboards page. Only admin users should be allowed to create dashboards. However, users should still be able to run searches in the search bar. Can anyone help me with this?

Thanks, Megha
Let's say I have a CSV input with the following columns: _raw,user,src_ip

The _raw event is: "Accepted public key for user $user$ from $src_ip$"

Is there a way to replace $user$ and $src_ip$ in _raw with the values of the corresponding fields? I tried using "foreach" and "rex" in sed mode, but it doesn't look like rex understands <<FIELD>> and '<<FIELD>>'. Is there another way to do this?
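A sketch that may do this, assuming the tokens in _raw are literally $user$ and $src_ip$: foreach substitutes <<FIELD>> into the eval template before it runs, so replace() can stand in for rex mode=sed:

| foreach user src_ip
    [eval _raw=replace(_raw, "\$<<FIELD>>\$", '<<FIELD>>')]

For the user field this expands to eval _raw=replace(_raw, "\$user\$", 'user'), i.e. the literal token is swapped for the field's value.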
Hello, we are excited to announce the preview of Splunk Incident Intelligence.

What is Splunk Incident Intelligence? Splunk Incident Intelligence is an effort to develop a solution that provides an optimal user experience for enterprises to manage their incident response process for applications and infrastructure as they navigate their digital transformation and cloud migration initiatives.

Want to request more features? Add your ideas and vote on other ideas at the Splunk Incident Intelligence Ideas Portal. Please reply to this thread with any questions or to get extra help!