All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


In Splunk Enterprise 8.1, using chart with fractional span values such as 0.54 or 0.95 that are subject to rounding error produces duplicate bins. For example, with | makeresults count=1000 | eval x=(random()/2147483647)*20 as the base search:

| chart count over x span=0.54 generates a duplicate bin at 9.72-10.26 with a count of 0.
| chart count over x span=1.54 generates a duplicate bin at 15.40-16.94 with a count of 0.
| chart count over x span=2.54 generates a duplicate bin at 15.24-17.78 with a count of 0.

Changing the x values results in different outcomes, of course, but rounding appears to be the cause. With | makeresults count=1000 | eval x=(random()/2147483647)*1000:

| chart count over x span=10.54 generates duplicate bins at 31.62-42.16, 52.70-63.24, 94.86-105.40, and 642.94-653.48 with counts of 0.
| chart count over x span=10.95 generates duplicate bins at 32.85-43.80 and 153.30-164.25 with counts of 0.

With | makeresults count=1000 | eval x=(random()/2147483647)*200000, | chart count over x span=100.54 generates 258 duplicate bins with counts of 0.

I haven't tested earlier versions of Splunk yet, but I'm curious whether others are seeing the same issue. My personal Splunk account isn't attached to a support agreement, so I can't submit a bug report.
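For anyone wanting to see the underlying effect outside Splunk: this is not Splunk's actual binning code, just a minimal Python sketch showing that fractional spans such as 0.54 have no exact binary representation, so bin edges computed in two equally reasonable ways can disagree by an ulp, which is enough for a binning routine to emit an extra, empty bin at a boundary.

```python
# The canonical demonstration of the same rounding class:
classic = (0.1 + 0.2 == 0.3)
print(classic)  # False: 0.1 + 0.2 is 0.30000000000000004

# Bin edges for span=0.54 computed by multiplication vs. by
# repeated addition (as a naive binning loop might do):
span = 0.54
by_multiplication = [i * span for i in range(20)]
by_accumulation, edge = [], 0.0
for _ in range(20):
    by_accumulation.append(edge)
    edge += span

# Indices where the two edge computations disagree due to rounding.
mismatches = [i for i in range(20)
              if by_multiplication[i] != by_accumulation[i]]
print(mismatches)
```

None of this proves where Splunk's binning rounds; it only shows that two plausible edge computations need not agree for such spans.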
Hi, I need to calculate a sum of different counters from several sourcetypes. They are all in one index, but simple operations return no results when the counters come from different sourcetypes. For example:

host=host1 sourcetype IN (st1,st2,st3,st4,st5,st6) | sort 0 - _time | reverse | eval totals=c0+c1+c2+c3 | delta totals as dtot | timechart span=5m per_second(dtot)

works fine when c0-c3 are all from the same sourcetype, but as soon as I add at least one counter from another sourcetype, there's no data. Can you explain what's wrong? Am I violating some important constraint within Splunk? Is there a workaround for this? Cheers, Alex
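One likely explanation (an assumption on my part, not confirmed from the post): each event carries only its own sourcetype's counters, and eval arithmetic returns null whenever any operand is null, so c0+c1+c2+c3 is null in every event. A minimal Python sketch of that behavior, with illustrative field values:

```python
# Two events, each carrying only the counters its sourcetype emits.
events = [
    {"sourcetype": "st1", "c0": 5, "c1": 7},   # st1 events have c0/c1 only
    {"sourcetype": "st2", "c2": 3, "c3": 11},  # st2 events have c2/c3 only
]

def strict_total(e):
    # Mimics eval's null propagation: any missing operand -> null result.
    vals = [e.get(f) for f in ("c0", "c1", "c2", "c3")]
    return None if None in vals else sum(vals)

def coalesced_total(e):
    # Mimics fillnull/coalesce: treat missing counters as 0.
    return sum(e.get(f, 0) for f in ("c0", "c1", "c2", "c3"))

print([strict_total(e) for e in events])     # [None, None]
print([coalesced_total(e) for e in events])  # [12, 14]
```

In SPL terms that would correspond to eval totals=coalesce(c0,0)+coalesce(c1,0)+coalesce(c2,0)+coalesce(c3,0), or a fillnull value=0 c0 c1 c2 c3 before the eval; whether summing across sourcetypes per time bucket is what you actually want is a separate question.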
Hi, I am new to machine learning and am trying to process data with MLTK. I would like to automate the process in the following ways:

Preprocessing: do I need to preprocess my data? How? Do I need to remove outliers?
Parameters to predict with: how do I choose which parameters to predict with?
Algorithm parameter configuration: what values should I use for the algorithms' parameters?
Results benchmarking: how can I compare all algorithms and use the best one? What is a good "score"?

In a dream world, it would be cool to have a place to upload the CSV and have MLTK tell me which algorithm to use and which parameters to set, by brute-forcing all algorithm and parameter permutations. Is there any advice or feature that would get me closer to this?
Have a nice day, everyone! I need to export dashboards from Splunk Enterprise in some format (PDF, PNG, etc.) to Telegram channels. How can I do this? Are there special add-ons for this, or self-written scripts, or something else? I would be grateful for any information.
Hello, I am attempting to install Splunk on a fresh Ubuntu Server 20 VM. The VM is on ESXi, with a pfSense VM routing traffic and a few other VMs making up a malware analysis lab. My issue is that I cannot get into the web interface for Splunk Enterprise. I have tried several things: I have made sure that Splunk is running on port 8000 and listening, disabled iptables, added a pass-everything rule on each of my pfSense interfaces, and confirmed that Splunk started correctly with no errors in the log files. I can ping from the Splunk VM to my laptop, but not from my laptop to the VM, although I can reach all of the other VMs. In Wireshark, a ping sent from the Splunk VM arrives at my laptop successfully, but a ping from my laptop to the VM gets no response and keeps retransmitting. Can anyone help?
Hi, I am trying to attach an external document (1.5 MB) to the alerts being triggered, so that users can refer to the document and take action. I have not been able to find any option to do so. Please advise if anyone has tried such a use case.
I want to add a panel to my dashboard that shows only the bulletin message. How should I proceed? Is this possible with XML, or do I need JS? Please suggest.
Hello, I'm trying to add columns/fields from an additional CSV lookup at the end of the table portion of a search, to create a report as below, but I'm not sure that is possible, as it is not working; I just get a couple of blank additional columns with some error names.

sourcetype=ib:ipam:network index=ib_ipam | eval dedup_key=view."/".address."/".cidr | dedup dedup_key | eval Network_CIDR=address."/".cidr | search view = "Ashland" | ................................................................................................ | table Timestamp, "Network View", Network, CIDR, Total, Allocated, Reserved, Assigned, Protocol, "Utilization %", Unmanaged, [|inputlookup Ashland-Networks-EAs.csv |search Network = Network_CIDR |table Network, Region_DDI]

Any help would be very appreciated. Thanks, Omar.
So here is my existing query as it runs now:

sourcetype=snort [search sourcetype=snort | top limit=20 src | table src] | stats count, values(signature) as Sigs by src | sort -count | lookup dnslookup clientip as src OUTPUT clienthost as DST_RESOLVED | iplocation src | fields src, count, Country, DST_RESOLVED, Sigs | rename src as "Source IP", count as Count, DST_RESOLVED as "DNS Resolution", Sigs as Signatures

I am not the original builder of this query, but I am editing it. These are normalized Snort logs. I'd like to return the top 20 signatures by source, while displaying source (src), count, country, DNS resolution (dnslookup), and signature (Sigs). There are signatures I want to completely exclude by sig_id, and there are signatures I would like to exclude only when they come from a specific src or CIDR range. I seem to be creating unbalanced parentheses when trying my boolean expressions or where clauses. Please assist.
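To make the boolean grouping explicit before wrestling with SPL parentheses, here is a small Python sketch of the two exclusion rules; all sig_ids, signature names, and CIDRs below are made up for illustration. The SPL equivalent would have the same shape: NOT sig_id IN (...) AND NOT (signature="..." AND cidrmatch("10.0.0.0/8", src)).

```python
import ipaddress

# Rule 1: these sig_ids are excluded everywhere (values are examples).
excluded_sig_ids = {1852, 2100}
# Rule 2: this signature is excluded only for sources in this CIDR.
excluded_sig_per_cidr = {"ET SCAN nmap": ipaddress.ip_network("10.0.0.0/8")}

def keep(event):
    if event["sig_id"] in excluded_sig_ids:
        return False
    net = excluded_sig_per_cidr.get(event["signature"])
    if net and ipaddress.ip_address(event["src"]) in net:
        return False
    return True

events = [
    {"sig_id": 1852, "signature": "x", "src": "1.2.3.4"},          # rule 1
    {"sig_id": 9, "signature": "ET SCAN nmap", "src": "10.1.2.3"}, # rule 2
    {"sig_id": 9, "signature": "ET SCAN nmap", "src": "8.8.8.8"},  # kept
]
print([keep(e) for e in events])  # [False, False, True]
```

Note that negating the whole parenthesized conjunction (the "NOT ( ... AND ... )" part) is usually where the unbalanced parentheses creep in.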
Hi, I have a JSON event similar to:

{"stages":[{"duration":12,"status":"Success","children":[{"test":"integration","result":"passed"},{"test":"regression","result":"failed"}]},{"duration":1.5,"status":"Success","children":[{"test":"unit","result":"passed"},{"test":"regression","result":"passed"}]},{"duration":3.1,"status":"Success","children":[{"test":"integration","result":"passed"},{"test":"unit","result":"failed"}]}]}

where children is a list of maps inside a list of maps. The problem is that this list is so large that the event exceeds the 10000-character limit, and I don't have admin access, so I cannot increase that limit. What I would like to do is remove the children field inside each map in the stages list. I've tried numerous approaches without any luck. Does anyone know of a way to do this? Thanks
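If the data can be cleaned before it reaches Splunk (in the producing application or a preprocessing script), the transformation itself is small; here is a sketch using a shortened version of the sample event:

```python
import json

# Shortened sample of the event described in the post.
raw = ('{"stages":[{"duration":12,"status":"Success","children":'
       '[{"test":"integration","result":"passed"}]},'
       '{"duration":1.5,"status":"Success","children":'
       '[{"test":"unit","result":"passed"}]}]}')

event = json.loads(raw)
for stage in event["stages"]:
    stage.pop("children", None)  # drop the oversized nested list

print(json.dumps(event))
# {"stages": [{"duration": 12, "status": "Success"}, {"duration": 1.5, "status": "Success"}]}
```

The caveat: if the event is already truncated at index time, no search-time trick can recover the cut-off tail, so the trimming has to happen before ingestion (or an admin has to raise the TRUNCATE limit in props.conf).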
I have this query: index=some_index | timechart limit=15 useOther=false count by acct_id and it needs to run over a time period of up to one month. It currently takes a very long time to run, and it looks at around 70 million events a day. I could accelerate the report, but even then it takes a while to complete the chart, even once it says it has scanned 100% of the time period.
Hello, I must be really tired. I cannot find the Add New Response Action option, which is part of setting up my new ES. Can anyone help? Thank you!
I'm interested in creating an alert, scheduled to run every 60 minutes, that searches for hosts which have had >85% CPU load over a span of 5 minutes. Here's the search:

index=index sourcetype=cpu | streamstats time_window=5min latest(cpu_load_percent) count by host | eval cpu_load_percent=if(count<18,null,round(cpu_load_percent, 2)) | where cpu_load_percent>85 | dedup host | table host, _time, cpu_load_percent

From there, I would like a report generated in which, for each host, a timechart covers the last 60 minutes, showing CPU percentages for each of the processes run on that host. Ideally this would be a line chart, with a line for each of the top 10 CPU-heavy processes. I've tried using | transaction, and this is what I have so far:

index=index sourcetype=cpu AND sourcetype=top host=$host$ | timechart latest(cpu_load_percent) by COMMAND

I'd really appreciate any guidance on how to implement an alert of this type.
index="wineventlog" sourcetype="script:installedapps" | chart values(*) as * count by DisplayName | fields DisplayName, DisplayVersion, host, count | search DisplayName=* | sort host, DisplayName

How can I visualize this as a pie chart? I would like a pie chart of the top 10 software titles and their counts, with drilldown to each piece of software.
This is a pretty specific use case but was difficult to work through.  Documenting for future generations.
Hi Splunk community, I am trying to determine the impact of removing Adobe Flash from our environment. I have done a basic search, and the results returned are much higher than expected, most probably because staff are accessing external content as well as internally hosted content. Is it possible to write a query that tells me which URL invoked Flash Player? I have tried:

event_simpleName=ProcessRollup* FileName=FlashUtil*_ActiveX.exe and FileName=Flash*.ocx

Neither returns DNS requests or URLs. So far, to get some answers, I run a separate search on the host, based on the timestamp from the results of the query above, looking up the DNS request. Example result:

Domainname: ssl.gstatic.com   host: computer123   user: user123   filename: iexplore.exe   commandline: "C:\Program Files\Internet Explorer\iexplore.exe" https: // docs.google.com/spreadsheets/z/xyz/edit?usp=drive_web

Most DNS requests land within a fraction of a second, or +1 second. Finding a computer with useful data is luck of the draw and very time consuming. Is anyone able to help with the above query?
Hello, please help with the below. The panel should contain two rows:

search by employeeid (hyperlink)
search by app (hyperlink)

Once one of these hyperlinks is clicked, it should open a new search with the corresponding query:

index=x | search employeeid=123
index=x | search app=abc

Please help with this. Thanks in advance.
Hello everyone, I just updated my Splunk from version 7.2.6 to version 8.1.2, but I have an app called "splunk_app_infrastructure" that is throwing the following error: Unable to initialize modular input "em_entity_migration" defined in the app "splunk_app_infrastructure": Introspecting scheme = em_entity_migration: script running failed (exited with code 1). Has the same thing happened to anyone else? I am attaching a screenshot of the error. Thank you very much in advance.
Hi, I am trying to make sure that a particular sourcetype's logs reach target 1 and not target 2. I also tried to send particular log events to target 1 and discard the rest, but that is not working at all. The configuration below is on a heavy forwarder.

inputs.conf:

[tcp://1514]
sourcetype = syslog
connection_host = dns
_TCP_ROUTING=target1

(I added _TCP_ROUTING here only after transforms.conf wasn't working; it stopped the logs going to target2, but event filtering still does not work.)

transforms.conf:

[vmwarelogs]
REGEX=(logged out|Rejected password for user|Cannot login|logged in as|Accepted user for user|was updated on host|Password was changed for account|Destroy VM called)
DEST_KEY=_TCP_ROUTING
FORMAT=target1

[discarlogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[vmw-syslog]
Tranforms-routing=vmwarelogs,discarlogs

outputs.conf has two target groups: target1, and target2 (the default group).

Issue 1: when I defined the target group in transforms.conf to send the logs to target 1 and not target 2, target 2 still received the logs. Defining _TCP_ROUTING in inputs.conf achieved the first objective (forwarding only to target1), but then inputs.conf takes precedence and the transforms.conf filter does not work.

Issue 2: I want to send specific logs to target1 and discard the rest, but with the nullQueue transform called from props.conf, no logs are sent at all.

Please help me work out how to achieve this. Is it possible to use a whitelist attribute in inputs.conf itself?
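For what it's worth, here is a sketch of the documented route-and-filter pattern, with two details that may matter in the config above (assumptions, since I can't see the full environment): the props.conf attribute must be spelled TRANSFORMS- exactly ("Tranforms-routing" would be silently ignored), and the props stanza must match the sourcetype the input actually assigns (syslog per the inputs.conf above, not vmw-syslog). When several transforms set the same key, the last matching one wins, so discard everything first and then re-queue the events to keep:

```ini
# props.conf -- stanza name must match the sourcetype set in inputs.conf
[syslog]
TRANSFORMS-routing = discardAll, keepVmware, routeVmware

# transforms.conf
# 1) Send everything to the nullQueue by default...
[discardAll]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# 2) ...then put the interesting events back on the indexing path
#    (a later matching transform overrides the queue key)...
[keepVmware]
REGEX = (logged out|Rejected password for user|Cannot login|logged in as|Accepted user for user|was updated on host|Password was changed for account|Destroy VM called)
DEST_KEY = queue
FORMAT = indexQueue

# 3) ...and route those same events to target1 only.
[routeVmware]
REGEX = (logged out|Rejected password for user|Cannot login|logged in as|Accepted user for user|was updated on host|Password was changed for account|Destroy VM called)
DEST_KEY = _TCP_ROUTING
FORMAT = target1
```

With routing handled here, the _TCP_ROUTING override in inputs.conf could be removed: events that survive the filter carry target1 and should never reach the target2 default group. Again, this is a sketch of the pattern from the forwarding documentation, not a tested config for this environment.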
I have tried many ways to generate properly formatted JSON so that Splunk will parse it and I can put alerts on my data, but no success yet, and the log is really simple:

2021-02-19T18:35:43,878Z [main] INFO dev-AniMatchIngester - { "createTS":"2021-02-19T10:35:43Z", "accountId":"333333", "correlationId":"1112222", "msgType":"raw_published", "Outcome":"Success", "eventOccurrenceTimestamp":"2020-01-14 08:12:07.111", "Type":"TEST", "eventType":"Success" }

Ideally I want these fields in the Interesting Fields section, or at the least I should be able to run meaningful queries based on them, such as eventOccurrenceTimestamp > today or Type=="TEST" comparisons. It was so easy in ELK.
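The likely sticking point is that the JSON payload sits after a plain-text prefix, so the whole event is not valid JSON and automatic JSON extraction does nothing. A minimal sketch (outside Splunk) of isolating and parsing the braced payload; inside Splunk the analogous search-time step would be a rex capturing the {...} part followed by spath:

```python
import json
import re

# The sample event from the post: a text prefix followed by a JSON body.
line = ('2021-02-19T18:35:43,878Z [main] INFO dev-AniMatchIngester - '
        '{ "createTS":"2021-02-19T10:35:43Z", "accountId":"333333", '
        '"correlationId":"1112222", "msgType":"raw_published", '
        '"Outcome":"Success", '
        '"eventOccurrenceTimestamp":"2020-01-14 08:12:07.111", '
        '"Type":"TEST", "eventType":"Success" }')

# Grab everything from the first "{" to the last "}" and parse it.
payload = json.loads(re.search(r"\{.*\}", line).group(0))
print(payload["Type"], payload["Outcome"])  # TEST Success
```

Alternatively, if you control the producer, logging the bare JSON object as the whole message (no prefix) lets the indexer treat the event as structured data directly.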