All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


When I add all the required details in the Splunk Add-on for Office 365 and then click Add, I get the following error (screenshot attached). Regards, Faisal
Hi, could you please let me know how to split data into multiple indexes (index1, index2) on the same indexer, from one input source on one heavy forwarder?

I tried with the following configuration, but all the data goes to the one index defined in inputs.conf. If I remove the index from inputs.conf, all the events go to the main index. Thank you in advance.

Here are my configuration and data:

INPUTS.CONF
======
[monitor:///opt/splunk/var/log/tesData]
disabled = false
host = heaveforwarder1

PROPS.CONF
===========
[source::///opt/splunk/var/log/tesData]
TRANSFORMS-routing=vendorData,secureData

TRANSFORMS.CONF
==========
[vendorData]
REGEX=5617605039838520
DEST_KEY=_MetaData:Index
FORMAT=index1

[secureData]
REGEX=6794850084423218
DEST_KEY=_MetaData:Index
FORMAT=index2

Test data:
[08/June/2022:18:23:07] VendorID=5038 Code=C AcctID=5617605039838520
[08/June/2022:18:23:22] VendorID=9109 Code=A AcctID=6794850084423218
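For anyone comparing notes, a hedged sketch of standard index-time routing that works for data like this. The key assumption: the props stanza must match the actual source of each monitored file, which for this monitor is /opt/splunk/var/log/tesData/<filename>, so the source:: pattern needs a trailing /* wildcard (and no extra leading slashes):

PROPS.CONF
[source::/opt/splunk/var/log/tesData/*]
TRANSFORMS-routing = vendorData, secureData

TRANSFORMS.CONF
[vendorData]
REGEX = AcctID=5617605039838520
DEST_KEY = _MetaData:Index
FORMAT = index1

[secureData]
REGEX = AcctID=6794850084423218
DEST_KEY = _MetaData:Index
FORMAT = index2

This must live on the first full Splunk instance that parses the data (the heavy forwarder here), and both index1 and index2 must already exist on the indexer.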
For Windows-based DNS, does anyone know of a few search examples I could use to look up DNS entries, like an A record, please?
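A hedged starting point, assuming DNS debug logging is collected via the Splunk Add-on for Microsoft Windows; the index name and the extracted field names below are assumptions, so adjust them to whatever your deployment actually produces:

index=windows sourcetype=MSAD:NT6:DNS
| search record_type=A
| stats count by src, query, record_type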
index=_internal source=*metrics.log
| eval MB=round(kb/1024,2)
| search group="per_sourcetype_thruput"
| stats sum(MB) by series
| eval sourcetype=lower(series)
| table index sourcetype "sum(MB)"
| append
    [| tstats latest(_time) as latest where index=* earliest=-24h by sourcetype
     | eval LastReceivedEventTime = strftime(latest,"%c")
     | table index, sourcetype, LastReceivedEventTime
     | eval sourcetype=lower(sourcetype)]
| stats values(*) as * by sourcetype
| where LastReceivedEventTime != ""

The query above gives me sourcetype, the latest timestamp, and sum(MB), but I am unable to get the index. Can someone please help?
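A hedged sketch of one likely fix: the per_sourcetype_thruput series in metrics.log carries no index field, and the tstats leg only groups by sourcetype, so no index value ever survives to the final stats. Letting tstats supply the index (it groups cheaply by indexed fields) is one option:

| tstats latest(_time) as latest where index=* earliest=-24h by index, sourcetype
| eval LastReceivedEventTime=strftime(latest,"%c"), sourcetype=lower(sourcetype)
| append
    [search index=_internal source=*metrics.log group="per_sourcetype_thruput"
     | eval MB=round(kb/1024,2)
     | stats sum(MB) as MB by series
     | eval sourcetype=lower(series)
     | fields sourcetype, MB]
| stats values(index) as index, values(MB) as MB, values(LastReceivedEventTime) as LastReceivedEventTime by sourcetype
| where LastReceivedEventTime != ""

Note that metrics-based MB figures are per sourcetype, not per index, so a sourcetype feeding several indexes will show its combined volume.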
Hey all, I'm looking for some advice. We currently have multiple ASAs sending logs to rsyslog. The logs are stored in folders based on the hostname, and I currently have the universal forwarder monitoring the parent folder. When it comes to using Splunk add-ons such as "Firegen Log Analyzer for Cisco ASA", it asks me to specify an index. Do I need to compile all the logs from the ASAs into a single directory and then create an index for it? Both the rsyslog and Splunk hosts are Linux (Ubuntu). Any help would be appreciated. Thanks, Will
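If it helps, a hedged inputs.conf sketch for the current layout; it assumes the logs land under a path like /var/log/remote/<hostname>/... (the path, index name, and segment number are placeholders to adjust). There is no need to merge the ASA logs into one directory — one monitor stanza and one index can cover all of them:

[monitor:///var/log/remote]
disabled = false
index = cisco_asa
sourcetype = cisco:asa
# take the host name from the 4th path segment: /var(1)/log(2)/remote(3)/<host>(4)
host_segment = 4

The index itself still has to be created on the indexer (Settings > Indexes) before data arrives; the add-on can then be pointed at it.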
Hello everyone. I have set up a cluster of 3 search heads, with Search Head 1 (SH1) configured as captain, but it turns out that there are times when I do not see events on SH1 and SH2. This causes alerts that I have configured on SH1 to fire, since no events are displayed on SH1. What I have to do is promote SH3 to captain, and the display of events is restored, temporarily solving the problem. After a while, SH1 takes the captain role again, and once more I can't view events on SH1. Why could this be happening? Regards.
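When this happens, the cluster's own view of member state is usually the first thing to check. A hedged sketch of the relevant CLI (hostnames and credentials are placeholders):

# on any member: captain identity, member status, replication state
splunk show shcluster-status -auth admin:changeme

# if needed, move the captain role to a specific member
splunk transfer shcluster-captain -mgmt_uri https://sh3.example.com:8089 -auth admin:changeme

Comparing the output while the problem is occurring against a healthy period may show whether SH1 is losing connectivity to the indexers or to its peers.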
I am looking to list globally all of the 'knowledge objects' within a search head. Where is the URL found within Settings? Or is there a different search that would provide the URL? I want to implement the following search, but need the URL and have not found it as of yet.

| rest <URL goes here> splunk_server=local count=0
| rename eai:* as *, acl.* as *
| eval updated=strptime(updated,"%Y-%m-%dT%H:%M:%S%Z"),
       updated=if(isnull(updated),"Never",strftime(updated,"%d %b %Y"))
| sort type
| stats list(title) as title, list(type) as type, list(orphaned) as orphaned, list(sharing) as sharing, list(owner) as owner, list(updated) as updated by app
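A hedged pointer: community searches of this shape typically use the generic directory endpoint, which returns knowledge objects of all types (verify it against the REST API reference for your version). It would slot into the search above in place of <URL goes here>:

| rest /servicesNS/-/-/directory splunk_server=local count=0

The -/- wildcards stand for "any user / any app", which is what gives the global view.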
When the syslog daemon writes to the syslog file, what is the timestamp it writes? Is it the host date/time or the event date/time? We quite often use the following in the syslog config:

template("${DATE} ${MSGHDR}${MSG}\n");
I have connected my blob storage to Splunk. The files are uploading to the index, but the CSV format is not working: each line becomes a separate result, with no header extraction applied to the results. I have tried the sourcetypes mscs:storage:blob:csv and csv, but neither worked. When I upload the same CSV as a data input, Splunk recognizes the CSV format and extracts the headers correctly. What am I missing? Are there more changes needed to extract the fields correctly?
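A hedged props.conf sketch for a custom CSV sourcetype (the stanza name is a placeholder). The caveat is that INDEXED_EXTRACTIONS must be configured on the instance that first parses the data — for a modular input like the Azure add-on, that is usually the node actually running the input, not the search head:

[my:blob:csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
SHOULD_LINEMERGE = false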
Hello all,

Is there any option that would allow me to add a button in a dashboard to export the dashboard to PDF, in landscape mode, without having to change system settings?

This should specifically apply just to this one dashboard.

Any idea?
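One hedged idea for Simple XML: an <html> panel that links straight to the PDF generation endpoint with a landscape paper size. The dashboard name is a placeholder and the endpoint path may vary by version and locale prefix, so treat this as a sketch to verify:

<panel>
  <html>
    <a href="/splunkd/__raw/services/pdfgen/render?input-dashboard=my_dashboard&amp;paper-size=a4-landscape" target="_blank">Export this dashboard to PDF (landscape)</a>
  </html>
</panel>

Because the paper size is set in the link itself, no system-wide setting changes, and only this dashboard carries the button.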
Hi, I just upgraded Splunk to 9.0.0 and realized that the log ~/var/log/splunk/splunkd.log started to get populated with messages like

06-14-2022 16:41:00.924 +0300 ERROR script [2047436 SchedulerThread] - Script execution failed for external search command 'runshellscript'.
06-14-2022 16:41:00.906 +0300 WARN SearchScheduler [2047436 SchedulerThread] - addRequiredFields: SearchProcessorException is ignored, sid=AlertActionsRequredFields_1655214060.1451, error=Error in 'script': Script execution failed for external search command 'runshellscript'.

The above shows up in the logs regardless of whether the alert has fired or not, and we rely quite heavily on running external scripts to make external systems aware of problems. I thought all our script bindings to alerts were now broken and we would have to do a rollback. However, I tested and the scripts were executed fine. My question is, what has changed here, if anything? I would like to get rid of those messages cluttering the logs in vain. And the other thing is, if something else really has changed, what should I do to make Splunk happy about the scripts in the alerts? I am looking for something other than "Please write a Python script to do the job." Any clues?
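To quantify the noise while investigating, a hedged sketch over the internal index; the component and field names are taken from the sample messages above:

index=_internal sourcetype=splunkd "runshellscript" (log_level=ERROR OR log_level=WARN)
| rex "sid=(?<sid>[^,\s]+)"
| timechart span=1h count by log_level

Adding | stats count by sid on the WARN messages may also show which scheduled searches produce them.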
Our users have discovered that they can add data to indexes. This could lead to a user accidentally polluting a production index. I searched the Splunk documentation and the Internet but was unable to find a solution. Does anyone know how we can restrict write access to indexes to the sc_admin role and allow read access for everyone else?
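One hedged direction, assuming the Add Data / upload workflow is gated by the edit_monitor capability (worth verifying for your version; in Splunk Cloud the equivalent change is made per role in the UI rather than in authorize.conf):

# authorize.conf, on-prem syntax shown for illustration; role name is a placeholder
[role_analyst]
importRoles = user
# read access to the production indexes
srchIndexesAllowed = prod_*
# assumption: revoking this capability removes the ability to add file/monitor inputs
edit_monitor = disabled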
Hi Splunkers, I'm on an add-on creation task, for Glassfish in particular, and as with other requests of this kind I'm configuring the props.conf file. In this configuration I'm facing the following issue: I know that events start with one of two kinds of character sequences:

[#|
a date in the format month (3 letters) and day, for example Jun 07

So, in BREAK_ONLY_BEFORE, I put the following regex:

[\[\#\|] | [\w{3}\s\d{2}]

and it works fine. A problem arises in the second case: when these events are present, they have a structure with many carriage returns. Here is a log sample:

Jun 07, 2022 8:29:52 PM <some_path_here>info
INFO: JVM invocation command line:
-XX:+UnlockDiagnosticVMOptions
-XX:MaxPermSize=<size>
-XX:PermSize=<size>
-XX:NewRatio=<size>
-Xms<size>
-Xmx4096m
<other lines that always start with the - symbol>

In this case, the default event line breaking splits every piece of info in these events into a different event. So, I set

SHOULD_LINEMERGE=1

but I still have problems; even with this configuration, the events are not properly merged. What I get are 3 different events, split in this way:

Jun 07, 2022 8:29:52 PM <some_path_here>info

the first part of the info starting with the - symbol, i.e.:

INFO: JVM invocation command line: -XX:+UnlockDiagnosticVMOptions -XX:MaxPermSize=<size> -XX:PermSize=<size> -XX:NewRatio=<size> -Xms<size> -Xmx4096m

the remaining part of the info starting with the - symbol, i.e.:

-Djavax.net.<remaining path> -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=<value> -DANTLR_USE_DIRECT_CLASS_LOADING=<value>

To fix this, I tried to use:

MUST_NOT_BREAK_AFTER=[\r\n]+

but it does not work. The event is still divided into the 3 parts above. How can I fix it?
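A hedged alternative that often sidesteps line-merge tuning entirely: disable SHOULD_LINEMERGE and let LINE_BREAKER define event boundaries only where a new header begins (the stanza name is a placeholder, and the date regex assumes headers like "Jun 07, 2022"):

[glassfish:server:log]
SHOULD_LINEMERGE = false
# break only before "[#|" or a "Mon DD, YYYY" header; the lookahead keeps the header in the event
LINE_BREAKER = ([\r\n]+)(?=\[#\||\w{3}\s\d{1,2},\s\d{4})
TIME_FORMAT = %b %d, %Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 30

With this, the continuation lines starting with - never match the break pattern, so they stay attached to their event. Note too that square brackets in a regex build character classes rather than groups, so [\w{3}\s\d{2}] does not express "3 letters, space, 2 digits"; an alternation like (\[#\||\w{3}\s\d{2}) is what that intent needs.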
Hi guys. I'm trying to use the Configuration Item field in the ServiceNow integration to pass a dynamic field to SNOW, so I'm trying something like $System_name.result$, but it didn't work. Do you know if this field accepts this format? Thank you. Clecimar
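A hedged pointer: Splunk alert-action tokens use the order $result.<fieldname>$ (taking the value from the first result row), so the attempt above would become:

$result.System_name$

This assumes System_name is a field present in the alert's search results; whether the add-on's Configuration Item box performs token substitution at all is worth confirming in its documentation.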
Looking to improve your security posture and address our June 2022 security advisories? You have come to the right place, and here are some helpful resources for you!

Product Security Page - Subscribe to get notified of all recent advisories
Improve Your Security Posture Tech Talk - Technical webinar focusing on our 9.0 security features and June 2022 security advisories
Splunk Enterprise Upgrade Best Practices - Lantern page with general tips for upgrades
Customer FAQ - Common questions on our recent security posture initiative
Documentation - All the juicy details on how to take action

Still have questions?
* If related to these advisories, you can comment below!
* If related to securing your Splunk instance, you can post a new question on this board!
* If specific to Splunk Enterprise or Splunk Cloud Platform, you can post to those boards!
Hi, say I have the following table:

Name | 2022-06-07 10:01:14 | 2022-06-07 22:01:13 | 2022-06-08 10:01:11 | 2022-06-08 22:01:25 | 2022-06-09 10:01:22 | 2022-06-09 22:00:59 | 2022-06-10 10:01:28
a    | 301 | 300 | 302 | 303 | 301 | 400 | 412
b    | 200 | 220 | 235 | 238 | 208 | 300 | 302

Can I color a cell based on its rate of increase from the previous value? For instance, if the value increased by 10% it would be yellow, by 20% orange, and so on. I'm looking for a solution based on Simple XML, where no additional files are needed. Thanks.
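A hedged sketch within plain Simple XML: expression-based color palettes can only see a cell's own value, not the neighboring column, so one workable pattern is to compute the percent change into its own column in SPL and attach the palette to that column (the field name and colors are assumptions):

<format type="color" field="pct_change">
  <colorPalette type="expression">if(value &gt;= 20, "#DC4E41", if(value &gt;= 10, "#F8BE34", "#FFFFFF"))</colorPalette>
</format>

Coloring the original value cell itself based on the previous column is beyond built-in Simple XML and normally needs a custom table renderer (a JS file), which the "no additional files" constraint rules out.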
Hi Community,

I have a dashboard that gives me an overview of the details. When I click on one of the rows, it takes me to a different dashboard, which takes the time range from the first dashboard and performs a granular search within those time limits, based on a parent ID. That search feeds a panel that shows no data at all. When I look at the SPL of the empty panel, I realise that it searches at millisecond precision: the whole range is within 1 second. The search is driven by a data model acceleration, which can only be accelerated down to whole seconds, so if I change the time range to more than a second I get the desired results.

To fix this, the only options I can think of are reconstructing the SPL without data models, which would slow down the search, or manipulating the time range so that I can get the data. Is there some other option I can use to get the desired results? Thanks in advance.

Regards, Pravin
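A hedged sketch of the time-padding option, done in the parent dashboard's drilldown so the SPL can stay on the data model: round the clicked time range out to whole seconds before handing it to the detail dashboard (all token, field, and dashboard names are placeholders):

<drilldown>
  <eval token="pad_earliest">floor($row.earliest$)</eval>
  <eval token="pad_latest">ceiling($row.latest$)</eval>
  <link target="_blank">/app/my_app/detail_dashboard?form.time_tok.earliest=$pad_earliest$&amp;form.time_tok.latest=$pad_latest$&amp;form.parent_id=$row.parent_id$</link>
</drilldown>

The detail search then covers at least one full second, so the acceleration summary can answer it; if sub-second precision matters, a final | where on _time can trim the extra events back out.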
Hello, first of all, sorry for my lack of knowledge if my question looks silly.

I have a data source providing events as follows: state start timestamp / UserName / StateName / duration of the state in seconds / state end timestamp.

I'm trying to produce a timechart showing the duration in each state for each user with a 1h span, so that we can clearly see the time spent in each state by the users for each hour of the day.

The issue is that a user can start a state at a given time and have a duration longer than 1h. For example, a user logs in and becomes available at 8:32, then stays in the "Available" state for 2h.

What I get so far with a basic timechart span=1h of the states by user:
2h in the 8h span
nothing in the 9h span
nothing in the 10h span

I would need to manipulate the query or the events in a way that makes the timechart report, in this example:
28 min in the 8h span
1 hour in the 9h span
32 min in the 10h span

as the state lasted from 8:32 to 10:32.

Here's my query today:

| eval AvailableDuration = if(State="Available",Duration,0)
| eval BusyDuration = if(State="Busy",Duration,0)
| eval CallDuration = if(State="In Call",Duration,0)
| timechart span=1h fixedrange=false useother=f limit=0 sum(CallDuration) as "In call" sum(AvailableDuration) as "available" sum(BusyDuration) as "Busy" by UserName

Is there a way to redistribute the durations by manipulating the data so that each hourly span is properly populated?

Thanks in advance for your help!
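A hedged sketch of one common pattern for this: expand each event into one row per hour it overlaps, clip the duration to each hourly bucket, and timechart the clipped values (it assumes _time is the state start and Duration is in seconds):

| eval start=_time, end=_time+Duration
| eval hour=mvrange(floor(start/3600)*3600, end, 3600)
| mvexpand hour
| eval seg=min(end, hour+3600) - max(start, hour)
| eval _time=hour
| timechart span=1h sum(seg) by State

For the 8:32-10:32 "Available" example, this yields 1680 s (28 min) in the 8h bucket, 3600 s in the 9h bucket, and 1920 s (32 min) in the 10h bucket. The same eval/if split by state used today can then be applied to seg instead of Duration.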
Hi, say I have this table:

Name | Date                | Flows
a    | 2022-06-13 23:01:26 | 200
a    | 2022-06-13 10:01:26 | 301
b    | 2022-06-13 23:01:26 | 504
b    | 2022-06-13 10:01:26 | 454

I'd like to create a table that uses the values of the Date column as new columns, grouping all identical Name values into one line, as follows (where the cell values are Flows):

Name | 2022-06-13 23:01:26 | 2022-06-13 10:01:26
a    | 200                 | 301
b    | 504                 | 454

I tried several approaches but failed. Could you assist?
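A hedged sketch, assuming Name, Date, and Flows are already extracted fields; xyseries does exactly this pivot:

... base search ...
| xyseries Name Date Flows

An equivalent form is | chart values(Flows) over Name by Date.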
Need a similar query for Splunk.

SELECT a.[CUSTOMER ID], a.[NAME], SUM(b.[AMOUNT]) AS [TOTAL AMOUNT]
FROM RES_DATA a
INNER JOIN INV_DATA b ON a.[CUSTOMER ID] = b.[CUSTOMER ID]
GROUP BY a.[CUSTOMER ID], a.[NAME]
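A hedged SPL equivalent; it assumes the two datasets are distinguishable by sourcetype (swap in whatever index/sourcetype actually holds them) and that the field names mirror the SQL columns:

(sourcetype=res_data) OR (sourcetype=inv_data)
| stats values(NAME) as NAME, sum(AMOUNT) as "TOTAL AMOUNT" by CUSTOMER_ID

In SPL the stats-by pattern generally replaces SQL's INNER JOIN + GROUP BY: NAME comes from the RES_DATA events and AMOUNT is summed from the INV_DATA events, keyed by the shared CUSTOMER_ID. Adding | where isnotnull(NAME) AND 'TOTAL AMOUNT' > 0 approximates the inner-join requirement that a customer exist in both sets.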