All Topics

This is a single-server Splunk deployment. I am indexing Duo MFA logs using the official Splunk app. In the "Searching and reporting" app, when I use the table command to view that data, each field is a multivalue field with the value duplicated. When I try the same search in the Duo app instead of "Searching and reporting", the fields are extracted only once, as expected, not duplicated.

For example, when I use the table command on this data in "Searching and reporting":

    email
    user@example.com
    user@example.com

When I use the table command on the same data in the Duo app:

    email
    user@example.com

So this problem appears to be limited to the "Searching and reporting" app, but I'm not finding any configuration specific to "Searching and reporting" related to this app/source. For example, there is nothing in SPLUNK/etc/apps/search/local/props.conf or transforms.conf that would affect this source. According to btool, the current configuration comes from SPLUNK/etc/apps/duo_splunkapp/default/props.conf and is:

    [source::duo]
    INDEXED_EXTRACTIONS = json
    KV_MODE = none

I also tried changing the config to this:

    INDEXED_EXTRACTIONS = none
    AUTO_KV_JSON = true
    KV_MODE = json

but that just resulted in neither the "Searching and reporting" app nor the Duo app having extractions for this data. How do I fix this so the "Searching and reporting" app has a single set of extractions and not duplicates?
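For reference, a quick way to see which props settings actually win for this source in each app context is btool; a minimal sketch, assuming $SPLUNK_HOME is the install root (the --app flag limits the listing to a single app's context, which helps spot an app-specific override):

    $SPLUNK_HOME/bin/splunk btool props list source::duo --debug
    $SPLUNK_HOME/bin/splunk btool --app=search props list source::duo --debug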
I am new to Splunk and would really appreciate some guidance or advice on how to do the following. We get different DLP alerts in different consoles, each console with different API capabilities. The alerts are logged in Microsoft Purview (a.k.a. the Compliance Center), in Microsoft Defender for Cloud Apps, in Microsoft 365 Defender, and in Splunk. My problem is: how do we get the necessary data out of any of these consoles? I'd like to know if Splunk has something tying them all together. I wish to build a search/report that correlates them, linking together certain fields of each source type to create a report and to generate an email that includes the alert details and a copy of the content that created the detection. Please help.
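Once the alerts from each console are indexed (for example via the Microsoft add-ons), correlation usually reduces to a stats grouped on a shared key. A minimal sketch, where the index names, sourcetypes, and the shared incident_id field are all assumptions to replace with whatever your data actually contains:

    (index=m365 sourcetype="o365:management:activity") OR (index=mcas sourcetype="mcas:alerts")
    | eval incident_id=coalesce(IncidentId, incident_id)
    | stats values(sourcetype) as sources values(Title) as titles values(UserId) as users by incident_id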
I have a few orphaned searches that I need to reassign, disable, or delete, and I am not able to do any of these.

1. The orphaned searches are visible in splunk/app/search/orphaned_scheduled_searches.............. and there the sharing is at user level. But I am not able to see the same ones under Settings > All configurations > Reassign Knowledge Objects: when I search for the alert name with "Orphaned" selected, I get no results.
2. When I checked the owner name in the internal index, it showed that the user has been disabled.

How can I reassign, disable, or delete these searches? Is there any way to do it via the CLI? Please help with this.
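Reassignment can also be done over the REST API by changing the object's ACL, which works even when the UI does not list the object. A minimal sketch, where the app name (search), saved search name, and new owner are assumptions (curl prompts for the admin password):

    curl -k -u admin https://localhost:8089/servicesNS/nobody/search/saved/searches/My%20Alert/acl \
         -d owner=newowner -d sharing=user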
All, I am looking at GitHub Enterprise logs as captured by my on-prem syslog-ng server. The logs being sent are JSON... mostly, but some values in the JSON key-value pairs contain breaking characters, and the app is not escaping those characters. SEDCMDing all of these events at the indexer was just overwhelming, and I don't think that is the correct approach. I am looking at the Splunk Add-on for GitHub and I see it wants a Splunk Connect for Syslog container deployed. Before I go and deploy that and learn how it works, how can I check whether Splunk has already solved this problem? I just don't want to build that sort of lab out and then find out there isn't already some sort of workaround in this tool for escaping JSON characters. Thanks, -Daniel
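For comparison, an index-time SEDCMD patch would look something like the sketch below; the sourcetype name is an assumption, and the expression only doubles lone backslashes, so treat it as a band-aid for one specific breaking character rather than a general JSON repair:

    [ghe:syslog]
    SEDCMD-escape_backslashes = s/\\/\\\\/g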
The events are JSON:

    {message:AZK} x 10
    {message:BCK} x 5
    {message:C} x 3

What I'm trying to get is a table that counts message by value, with a modified label:

    Message AZK - 10
    Message BCK - 5
    C - 3

I use this:

    | eval extended_message=case(
        match(_raw,"AZK"), "Message AZK",
        match(_raw,"BCK"), "Message BCK",
        1==1, message)
    | stats count as nombre by extended_message
    | sort nombre desc
    | table extended_message, nombre

I can't get the "C" rows to show up in the list: the message field from the JSON event is not interpreted (I don't know why). Thanks for your help.
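If the message field is not being auto-extracted (the sample values are unquoted, so the events are not strictly valid JSON), one option is to pull it out of _raw explicitly before the case(). A minimal sketch, assuming the raw text always looks like {message:VALUE}:

    | rex field=_raw "^\{message:(?<raw_message>[^}]*)\}"
    | eval extended_message=case(
        match(_raw,"AZK"), "Message AZK",
        match(_raw,"BCK"), "Message BCK",
        true(), raw_message)
    | stats count as nombre by extended_message
    | sort - nombre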
Dear all. When searching some database logs with index=my_db ... I have a field named "statement" with content like the example below:

    The login packet used to open the connection is structurally invalid; the connection has been closed. Please contact the vendor of the client library. [CLIENT: 192.20.21.22]

I need to create a new field, named IP2, containing the IP address shown above. In general, the rex command must look for the text between "[CLIENT: " and "]". Your help is appreciated. Best regards, Altin
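A minimal rex sketch that captures whatever sits between "[CLIENT: " and the closing bracket into IP2:

    index=my_db
    | rex field=statement "\[CLIENT:\s*(?<IP2>[^\]]+)\]"
    | table statement IP2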
Hi, Splunkers. I have a timechart in a dashboard:

    | timechart span=1h count by VQ

The timechart returns a graph with VQ_A, VQ_B, VQ_C and their values. I want to click these VQ_XXX series to use them as input to reopen the dashboard. How do I pass this "count by" result as input to my dashboard?

Actually, in my timechart the VQ in "count by VQ" comes from a token. There is a chart option below:

    <option name="charting.drilldown">$t_countby$</option>

If I put the token t_countby there, then when I click VQ_A, VQ_B, or VQ_C in my timechart, the value passed as input is VQ (which I selected from the dropdown with token t_countby), not VQ_A, VQ_B, or VQ_C, which is what I expected to be passed. Thanks in advance. Kevin
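In Simple XML, the clicked series name of a chart is available in the predefined drilldown token $click.name2$, so one way to feed it back into the dashboard is a <drilldown> block on the chart. A sketch, where form.vq_selected is an assumed input token name:

    <chart>
      ...
      <drilldown>
        <set token="form.vq_selected">$click.name2$</set>
      </drilldown>
    </chart>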
I currently have a query that returns what I need for a single day:

    (index=microsoftcloud sourcetype="ms:azure:accounts" source="rest*group*") OR (index=microsoftcloud sourcetype="ms:azure:accounts" source="rest*User*")
    | where match(userPrincipalName,"domain name") or match(userPrincipalName,"domain name")
    | eventstats count by id
    | eventstats count(eval((source="rest://MSGraph Group1 Members" OR (source="rest://MSGraph Group 2 Members") or (source="rest://MSGraph Group 3 Members")))) as total
    | eventstats count(eval(source="rest://MSGraph CL Users" AND count>1)) as current
    | dedup total, current
    | eval perc=round(current*100/total,1)."%"
    | eval missing=total-current
    | rename total as "In Scope Users"
    | rename current as "Current Users"
    | rename perc as "Percent Compliant"
    | rename missing as "Missing"
    | table "In Scope Users", "Current Users", "Missing", "Percent Compliant"

I am trying to make this show me a chart over the previous month with the daily result of the posted query. I have tried many "solutions" from the web, but nothing has worked. Any help is appreciated.
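The usual pattern for "the same number, but per day" is to bucket events by day and compute the aggregates with stats by _time instead of eventstats plus dedup. A rough sketch of the shape only; the eval conditions are lifted from the query above, and the per-id dedup logic would need reworking to be per-day as well:

    ... base search over the last month ...
    | bin _time span=1d
    | stats count(eval(match(source,"MSGraph Group"))) as total
            count(eval(source="rest://MSGraph CL Users")) as current by _time
    | eval "Percent Compliant"=round(current*100/total,1)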
We just started getting logs for our 2019 Exchange environment. I'm not a Splunk admin and have been advised to use these commands to search all Exchange send/receive logs, but I'm wondering what other search queries I can build/use to search by subject, user, sender, etc.

    index=msexchange sourcetype=msexchange:protocollog:smtpsend
    index=msexchange sourcetype=msexchange:protocollog:smtpreceive
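As a sketch: a quick way to discover what you can filter on is to list the extracted fields for one of those sourcetypes, then add key=value terms to the base search. The exact field names depend on the add-on (and subject typically lives in message-tracking logs rather than protocol logs), so treat the names below as assumptions:

    index=msexchange sourcetype=msexchange:protocollog:smtpreceive
    | fieldsummary maxvals=5

    index=msexchange sourcetype=msexchange:protocollog:smtpreceive sender="*@example.com*"
    | table _time sender recipient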
Hello, I have a log that looks like the example below. Each field has its own field name, and "viewed patient data in registration(XXTEST, ORANGE CRUSH)" here is event_name (a captured group is to be used):

    0000|2019-01-07T14:20:12.000000Z|patientid|lastname, firstname|personlastname|M|middelname||PIEIGHT||MRN||Viewed|viewed patient data in registration(XXTEST, ORANGE CRUSH)|00000||

The parenthesized part should be removed, as it is sensitive patient data; for example, "(XXTEST, ORANGE CRUSH)" should be removed.

In transforms.conf I have:

    [removedata]
    REGEX = ^(?:[^\|\n]|){13}(?P<event_name>[^\|]+)([^)])

In props.conf I have:

    REPORT-removedata = removedata

But it is still not working. Do I need to use the field name, or change my regex? Am I applying transforms properly? Thank you.
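Worth noting: REPORT- stanzas only create search-time field extractions; they never rewrite _raw, so they cannot strip data from events. Masking at index time uses a TRANSFORMS- stanza with DEST_KEY = _raw. A sketch, assuming the sourcetype stanza name is yours to fill in and that the sensitive text is always the trailing parenthesized group in the 14th pipe-delimited field:

    props.conf:
    [your_sourcetype]
    TRANSFORMS-removedata = removedata

    transforms.conf:
    [removedata]
    REGEX = ^((?:[^|\n]*\|){13}[^|(]*)\([^)]*\)(.*)$
    DEST_KEY = _raw
    FORMAT = $1$2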
Hello. When analyzing web traffic logs, at times the url field does not have a corresponding http_referrer field. We are interested in finding out which URL the original request came from, and there is looping involved. This is similar to this post: https://community.splunk.com/t5/Getting-Data-In/Loop-through-URL-and-http-referrer-to-find-original-request/m-p/138817#M28507 In that post, the user makes use of a script, which I cannot use in my environment. How can I use the map command (or any other command) to recursively loop through the URL field and find the original domain? For example:

    index=firewall url=malicious-domain.com

Actual flow of traffic: abc.com >> bcd.com >> no http_referrer field >> malicious-domain.com (http_referrer is <empty>). Expected result: abc.com
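map cannot recurse to an arbitrary depth, but a fixed number of hops can be walked by chaining it: each map stage looks up the event whose url matches the current event's http_referrer. A one-hop sketch under that assumption (repeat the map stage once per additional hop you want to follow):

    index=firewall url="malicious-domain.com"
    | fields http_referrer
    | map maxsearches=10 search="search index=firewall url=\"$http_referrer$\" | fields url http_referrer"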
Hi, while trying to configure the Insights app for Splunk on my search head, I want to send the alerts to a test index, but that index is not populating in the app; only certain other indexes are shown. Thanks.
I have an input whose value is like an odometer, so it's cumulative, and I collect samples within each 15-minute window. If I want to create a timechart that shows the total value per 15-minute interval, how would I do that? See the example below.

    1/17/2023 0:01:00 value 6
    1/17/2023 0:02:00 value 6
    1/17/2023 0:03:00 value 6
    1/17/2023 0:09:00 value 7
    1/17/2023 0:10:00 value 6
    1/17/2023 0:11:00 value 7
    1/17/2023 0:12:00 value 8
    1/17/2023 0:15:00 value 8

From minute 1 to minute 15 the total value is 54.

    1/17/2023 0:16:00 value 5
    1/17/2023 0:17:00 value 8
    1/17/2023 0:18:00 value 5
    1/17/2023 0:29:00 value 7
    1/17/2023 0:30:00 value 5

From minute 16 to minute 30 the total value is 30.
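Given the expected totals (6+6+6+7+6+7+8+8 = 54 and 5+8+5+7+5 = 30), this is a straight sum per 15-minute bucket, which timechart does directly; value is assumed to be an extracted numeric field:

    ... | timechart span=15m sum(value) as total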
Hi all, we have an issue with our environment (all currently running 8.2.8 on Windows Server 2016 Standard). We have a multisite indexer cluster consisting of 2 sites with 2 indexers in each site, a separate 2-node search head cluster, and 1 indexer master node. Replication is working as expected, and when manually taking nodes offline, search and data durability behave as desired (for example, if you take a whole site or a single node in a site offline, everything is still searchable). However, when a configuration bundle that requires a restart is deployed via the master node, all indexers in both sites restart at the same time, interrupting all searches. The following values are present in server.conf (per btool) on the master node:

    [clustering]
    percent_peers_to_restart = 10
    restart_timeout = 60
    rolling_restart = restart
    rolling_restart_condition = batch_adding
    replication_factor = 2
    site_replication_factor = origin:1,total:2
    site_search_factor = origin:1,total:2

On top of that, and also unpleasant: in many cases, for example when applying changes to props.conf for existing stanzas via the master node, the indexers restart although bundle validation on the master returned that a restart is not required. According to forum posts, the issue should have been fixed in the 6.5.2 release. This environment, however, was base-installed with 7.x, so it cannot be an issue carried along through upgrades. Any thoughts appreciated. Many thanks and regards.
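One way to separate the two behaviors is to trigger a rolling restart by hand on the master and watch whether peers honor percent_peers_to_restart outside of a bundle push; this is the standard CLI for that, run on the master node:

    splunk rolling-restart cluster-peers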
Hello, I am new to Splunk and am using it for testing purposes. I installed Splunk within the last 30 days and installed the ITE content pack recently. I got the error below when searching. I found documentation saying that search will be disabled when there are five or more license alerts, but I don't know why my usage is full.

Error in search:

    Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

Licensing alert: slave had no matching license pool for the data it indexed.

Licensing alert from the CLI:

    4565f4ac328ca2cebf5d54342bd63c99 category:orphan_peer create_time:1673625600 description:slave had no matching license pool for the data it indexed peer_id:FDE1BB3F-01D3-4B27-B0FB-19AAE1CD27A0 severity:WARN slave_id:FDE1BB3F-01D3-4B27-B0FB-19AAE1CD27A0

Appreciate any help.
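A quick way to see what is consuming the license is the internal license usage log; this is the standard search against _internal, run on the license master (it may itself be blocked while search is disabled):

    index=_internal source=*license_usage.log* type=Usage
    | timechart span=1d sum(b) as bytes by pool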
Hi team, I have deleted some old data using the "delete" command in Splunk, but I would like to know whether there is any way to check that those events are actually gone from the indexer buckets. Thanks in advance!
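Worth keeping in mind: delete only marks events unsearchable; the data still sits in the buckets on disk until they age out. So a search-side check (re-running the original search, or the sketch below with your index name and filters substituted) confirms the events are masked, not that the bytes are gone:

    index=your_index <your original filters>
    | stats count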
Hi Community, how do I route data with props and transforms over multiple heavy forwarders (HFs)?

    Source A -> Data Collector (HF) -> IDX Cluster A
                     |
                     | (data copy A)
                     +--> Source B Data Collector (HF) -> IDX Cluster A/B

Currently, the routing only works when sending directly to IDX Cluster A/B, but not via the Source B HF. Please help - Markus
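For reference, the standard _TCP_ROUTING pattern on a heavy forwarder looks like the sketch below; the stanza names, group names, and servers are assumptions. One gotcha with chained HFs: events already parsed by the first HF carry their routing key with them, so the intermediate HF needs its own outputs.conf groups (and defaultGroup or routing rules) to fan the copy out to both clusters:

    props.conf (on the HF that sees the data first):
    [source::source_a]
    TRANSFORMS-route = route_copy_a

    transforms.conf:
    [route_copy_a]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = cluster_a,hf_b

    outputs.conf:
    [tcpout:cluster_a]
    server = idxa1:9997,idxa2:9997

    [tcpout:hf_b]
    server = hfb:9997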
Hi, I have an application (test.app) that invokes multiple downstream application APIs (profile, payments, etc.), and we log the elapsed time of every downstream call as an element of a JSON array. Is it possible to plot a timechart of the p99 elapsed time of each downstream application call separately?

Sample log:

    {
      ......
      appName: test.app,
      downstreamStats: [
        {
          ...
          pathname: profile,
          elapsed: 250,
          ...
        },
        {
          ...
          pathname: payments,
          elapsed: 850,
          ...
        }
      ]
      ......
    }

I want to plot a timechart of the above logs with the p99 of elapsed time BY pathname.
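The usual shape is to expand the array so each element becomes its own row, then timechart with perc99. A minimal sketch, assuming the events are valid JSON and the index name is yours to fill in:

    index=your_index appName="test.app"
    | spath path=downstreamStats{} output=stat
    | mvexpand stat
    | eval pathname=spath(stat,"pathname"), elapsed=tonumber(spath(stat,"elapsed"))
    | timechart span=5m perc99(elapsed) by pathname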
Any suggestions on how to rename fields and keep those fields in their stated table order? I have a bunch of fields that are attributes named is_XXX. I want all those fields to be on the right-hand side of the table, so if I do

    <search>
    | foreach is_* [ eval "zz_<<MATCHSTR>>"=if(<<FIELD>>==1,"","") ]
    | fields - is_*
    | table entity entity_type *

it works nicely and puts the first two named fields as the first two columns, then the other fields, then all the zz_* fields. However, as soon as I add

    | rename zz_* as *

it changes the order and sorts all the columns (apart from the first two named ones) into alphabetical order. Any specifically named fields I add after entity_type keep their column order, but all fields output as a result of the wildcard lose their order after the rename.
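One thing worth testing: renaming one field at a time inside foreach instead of a single wildcard rename, since an in-place rename tends to leave each column where it already sits. A sketch under that assumption (behavior may vary by version, so verify on yours):

    <search>
    | foreach is_* [ eval "zz_<<MATCHSTR>>"=if(<<FIELD>>==1,"","") ]
    | fields - is_*
    | table entity entity_type * zz_*
    | foreach zz_* [ rename <<FIELD>> as <<MATCHSTR>> ]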
Hi, I have 2 hosts and 2 sources. Previously I was getting data from both sources, but we have since torn down and redeployed the existing servers. As a result, the host IPs are the same but the host names have changed. Now I am getting data from both hosts but from only one source; no data is arriving from the second source. How do I troubleshoot this issue?
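Two quick checks, as a sketch: on the search head, a tstats breakdown shows exactly which host/source pairs are still arriving (index name is a placeholder); on the forwarder itself, the CLI shows whether the input file or port is actually being read:

    | tstats count where index=your_index earliest=-24h by host, source

    splunk list inputstatus    (run on the forwarder)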