All Topics



I have multiple checkboxes which, depending on the selections, hide or show different panels. Consider one panel per checkbox. There's no problem if I select only one checkbox, but if I select multiple checkboxes, it doesn't display the multiple panels.

<input type="checkbox" token="search_option">
  <label>Search By</label>
  <choice value="sports">Sports</choice>
  <choice value="news">News</choice>
  <choice value="movies">Movies</choice>
  <change>
    <!-- Conditionally set/unset panels based on selections made -->
    <condition value="sports">
      <set token="input_sports">true</set>
      <unset token="input_news"></unset>
      <unset token="input_movies"></unset>
      <set token="output_sports">true</set>
      <unset token="output_news"></unset>
      <unset token="output_movies"></unset>
    </condition>
    <condition value="news">
      <set token="input_news">true</set>
      <unset token="input_sports"></unset>
      <unset token="input_movies"></unset>
      <set token="output_news">true</set>
      <unset token="output_sports"></unset>
      <unset token="output_movies"></unset>
    </condition>
    <condition value="movies">
      <set token="input_movies">true</set>
      <unset token="input_sports"></unset>
      <unset token="input_news"></unset>
      <set token="output_movies">true</set>
      <unset token="output_news"></unset>
      <unset token="output_sports"></unset>
    </condition>
    <condition match="$search_option$==&quot;sports news movies&quot; OR $search_option$==&quot;sports movies news&quot; OR $search_option$==&quot;news sports movies&quot; OR $search_option$==&quot;news movies sports&quot; OR $search_option$==&quot;movies sports news&quot; OR $search_option$==&quot;movies news sports&quot;">
      <set token="input_sports">true</set>
      <set token="input_news">true</set>
      <set token="input_movies">true</set>
      <set token="output_sports">true</set>
      <set token="output_news">true</set>
      <set token="output_movies">true</set>
    </condition>
  </change>
</input>

Specifically, when I select all 3 boxes (sports, news and movies), it doesn't output the 3 panels.
I think it doesn't output them because each individual checkbox condition unsets the other panels when only one checkbox is selected. If I remove the unset logic from the individual conditions, then the other panels are not removed when only a particular checkbox is selected. So, how do I: 1) show only the specific panel if only one checkbox is selected, and 2) show each corresponding panel if two or more checkboxes are selected? Thank you.
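A sketch of one common workaround, not the only one: since Simple XML <condition> blocks fire only on the first match, exact-match conditions can't cover every multi-select combination. Instead, derive one token per choice with <eval> and match() against the joined selection, and hang each panel off its own token with depends. The token names below are illustrative, and whether null() actually unsets a token can vary by Splunk version, so treat this as a starting point.

```xml
<input type="checkbox" token="search_option">
  <label>Search By</label>
  <choice value="sports">Sports</choice>
  <choice value="news">News</choice>
  <choice value="movies">Movies</choice>
  <change>
    <!-- $search_option$ expands to all selected values joined together,
         so test membership per choice instead of comparing exact strings -->
    <eval token="show_sports">if(match("$search_option$", "sports"), "true", null())</eval>
    <eval token="show_news">if(match("$search_option$", "news"), "true", null())</eval>
    <eval token="show_movies">if(match("$search_option$", "movies"), "true", null())</eval>
  </change>
</input>

<row>
  <!-- each panel depends only on its own token, so any combination works -->
  <panel depends="$show_sports$"><!-- sports panel content --></panel>
  <panel depends="$show_news$"><!-- news panel content --></panel>
  <panel depends="$show_movies$"><!-- movies panel content --></panel>
</row>
```

Because every token is evaluated independently on each change, selecting one box shows one panel and selecting all three shows all three, with no combinatorial condition list.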
Hi, we are trying to configure the Palo Alto add-on v6.2 inputs on a search head cluster, but the account page does not load; it just keeps spinning. It works fine on a standalone instance (HF), but that will not populate the MineMeld KV store lookups on the SHC. Any idea whether this add-on's inputs can actually work on a SHC? Thanks
I am integrating Qualys and Splunk using the TA-Qualys add-on and, to build my dashboards and filter events, I would like to use the "client_id" field, since I use a multi-tenant subscription. What change do I have to make to pull this extra data from Qualys using the TA-Qualys add-on?
Hello, I am working on a report and I need help with some delta columns that I added to it. My idea is to send this monthly. So far my query is this:

index=metricas_soc_summary source=TMCS_INCAPSULA_SUMMARY_MENSUAL earliest=-90d@d latest=@mon
| rename attack as "Tipo de ataque"
| timechart span=1mon cont=FALSE count as "Cantidad de ataques" values(mes) as mes by "Tipo de ataque"
| eval mes=strftime(_time, "%m"), mes=case(mes="01","enero", mes="02","febrero", mes="03","marzo", mes="04","abril", mes="05","mayo", mes="06","junio", mes="07","julio", mes="08","agosto", mes="09","septiembre", mes="10","octubre", mes="11","noviembre", mes="12","diciembre", 1==1, null())
| delta "Cantidad de ataques: Ataque de DDoS"
| delta "Cantidad de ataques: Ataques de autenticación"
| delta "Cantidad de ataques: Ataques externos"

(Don't mind the eval mes; I will use it to send the previous month's value $result.mes$ in the email body, if possible after the transpose.) This generates the first table shown; it has to look like the second one. If possible, I'd love the deltas expressed in % too, but that's not strictly necessary. And, if possible, it should still carry the month value (mes in Spanish) to add to the email body, but that's not necessary either. Thanks for the help
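For the percentage version, one sketch (the alias names are mine; delta emits current minus previous, so the previous value can be recovered as current minus delta):

```
| delta "Cantidad de ataques: Ataque de DDoS" as ddos_delta
| eval ddos_prev = 'Cantidad de ataques: Ataque de DDoS' - ddos_delta
| eval ddos_delta_pct = round(100 * ddos_delta / ddos_prev, 2)
```

The same pattern repeats for the other two delta columns; single quotes are needed because the column names contain spaces.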
Hi There, I'm trying to get the logs forwarded from containers in Kubernetes over to Splunk using HEC. Fluentd has been deployed and fluent.conf is updated with the below in the Config Map. FluentD deployed has the HEC plugin installed that I got from here https://github.com/splunk/fluent-plugin-splunk-hec           kind: ConfigMap apiVersion: v1 metadata: name: fluentd-config namespace: fluentd-logging selfLink: /api/v1/namespaces/fluentd-logging/configmaps/fluentd-config uid: 8769fb6d-6c7a-4ba4-bce9-597e373fafe3 resourceVersion: '85955' creationTimestamp: '2020-07-29T10:26:44Z' labels: k8s-app: fluentd-splunk annotations: kubectl.kubernetes.io/last-applied-configuration: > {"apiVersion":"v1","data":{"fluent.conf":"\u003cmatch fluent.**\u003e\n @type null\n\u003c/match\u003e\n\u003csource\u003e\n @type tail\n @log_level fatal\n @id in_tail_container_logs\n path /var/log/containers/*.log\n pos_file /var/log/fluentd-containers.log.pos\n tag \"#{ENV['FLUENT_CONTAINER_TAIL_TAG'] || 'kubernetes.*'}\"\n read_from_head true\n \u003cparse\u003e\n @type multi_format\n \u003cpattern\u003e\n format json\n time_key time\n time_format %Y-%m-%dT%H:%M:%S.%NZ\n \u003c/pattern\u003e\n \u003cpattern\u003e\n format /^(?\u003ctime\u003e.+) (?\u003cstream\u003estdout|stderr) [^ ]* (?\u003clog\u003e.*)$/\n time_format %Y-%m-%dT%H:%M:%S.%N%:z\n \u003c/pattern\u003e\n \u003c/parse\u003e\n\u003c/source\u003e\n\u003cfilter **\u003e\n @type concat\n key log\n separator \"\"\n multiline_start_regexp /^\\d{2}/\n flush_interval 1\n timeout_label @NORMAL\n\u003c/filter\u003e\n\u003cmatch **\u003e\n @type relabel\n @label @NORMAL\n\u003c/match\u003e\n\u003clabel @NORMAL\u003e\n \u003cfilter **\u003e\n @type record_transformer\n @id filter_containers_stream_transformer\n \u003crecord\u003e\n stream_name ${tag_parts[3]}\n \u003c/record\u003e\n remove_keys $.kubernetes.pod_id, $.kubernetes.master_url, $.kubernetes.container_image_id, $.kubernetes.namespace_id, $.kubernetes.labels, $.docker, 
$.stream, $.stream_name, $.kubernetes.container_name, $.kubernetes.container_image, $.kubernetes.host, $.kubernetes.namespace_name\n \u003c/filter\u003e\n \u003cfilter kubernetes.**\u003e\n @type kubernetes_metadata\n @id filter_kube_metadata\n kubernetes_url \"#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}\"\n verify_ssl \"#{ENV['KUBERNETES_VERIFY_SSL'] || true}\"\n ca_file \"#{ENV['KUBERNETES_CA_FILE']}\"\n \u003c/filter\u003e\n \u003cfilter **\u003e\n @type record_transformer\n enable_ruby true\n \u003crecord\u003e\n kubernetes ${record[\"kubernetes\"].merge({\"cluster_name\":\"test-fluentd-cnative1\"})}\n \u003c/record\u003e\n \u003c/filter\u003e\n \u003cmatch kubernetes.**\u003e\n @type rewrite_tag_filter\n \u003crule\u003e\n key $['kubernetes']['namespace_name']\n pattern ^(.+)$\n tag $1.${tag}\n \u003c/rule\u003e\n \u003c/match\u003e\n \u003cmatch default.kubernetes.**\u003e\n @type file\n path /var/log/fluent/upgrade_default*.log\n \u003cformat\u003e\n @type json\n \u003c/format\u003e\n buffer_type file\n buffer_path /var/log/upgrade_default*.log\n time_slice_format %Y-%m-%d.%H%M\n append true\n flush_interval 1s\n \u003c/match\u003e\n\u003c/label\u003e\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"k8s-app":"fluentd-splunk"},"name":"fluentd-config","namespace":"fluentd-logging"}} data: fluent.conf: | <match fluent.**> @type null </match> <source> @type tail @log_level fatal @id in_tail_container_logs path /var/log/containers/*.log pos_file /var/log/fluentd-containers.log.pos tag "#{ENV['FLUENT_CONTAINER_TAIL_TAG'] || 'kubernetes.*'}" read_from_head true <parse> @type multi_format <pattern> format json time_key time time_format %Y-%m-%dT%H:%M:%S.%NZ </pattern> <pattern> format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/ time_format %Y-%m-%dT%H:%M:%S.%N%:z </pattern> </parse> </source> <filter **> @type concat key log separator "" 
multiline_start_regexp /^\d{2}/ flush_interval 1 timeout_label @NORMAL </filter> <match **> @type relabel @label @NORMAL </match> <label @NORMAL> <filter **> @type record_transformer @id filter_containers_stream_transformer <record> stream_name ${tag_parts[3]} </record> remove_keys $.kubernetes.pod_id, $.kubernetes.master_url, $.kubernetes.container_image_id, $.kubernetes.namespace_id, $.kubernetes.labels, $.docker, $.stream, $.stream_name, $.kubernetes.container_name, $.kubernetes.container_image, $.kubernetes.host, $.kubernetes.namespace_name </filter> <filter kubernetes.**> @type kubernetes_metadata @id filter_kube_metadata kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}" verify_ssl "#{ENV['KUBERNETES_VERIFY_SSL'] || true}" ca_file "#{ENV['KUBERNETES_CA_FILE']}" </filter> <filter **> @type record_transformer enable_ruby true <record> kubernetes ${record["kubernetes"].merge({"cluster_name":"test-fluentd-cnative1"})} </record> </filter> <match kubernetes.**> @type rewrite_tag_filter <rule> key $['kubernetes']['namespace_name'] pattern ^(.+)$ tag $1.${tag} </rule> </match> <match default.kubernetes.**> @type splunk_hec hec_host splunk-host.com hec_port 8088 hec_token xxxxxxxxxx index test source ${tag} sourcetype _json <format> @type json </format> </match> </label>          Currently, however,  it's not working for some reason. Any insights on what config change is required to the above will be greatly appreciated.   Cheers, Rachael  
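Two things are worth checking first. A quick way to confirm the HEC side independently of Fluentd is a manual event post; the host, token, and index below are the placeholders from the config above:

```
curl -k "https://splunk-host.com:8088/services/collector/event" \
  -H "Authorization: Splunk xxxxxxxxxx" \
  -d '{"event": "hec connectivity test", "index": "test", "sourcetype": "_json"}'
```

Separately, note that the rewrite_tag_filter rule prefixes each tag with the pod's namespace_name, so <match default.kubernetes.**> only forwards pods running in the default namespace; with this config, pods in any other namespace never reach the splunk_hec output.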
I have a search that is giving me this data set:

ID      status  Stamp
alex    esb     1595989827764
alex    fuz     1595989827762
jake    esb     1596056447122
jake    fuz     1596056447085
josh    esb     1596054751935
josh    fuz     1596054751852
stefan  esb     1596056406846
stefan  fuz     1596056406806

I want to compare the Stamp values by ID and show any IDs where the stamp for esb is greater than the stamp for fuz by at least 100. Any help appreciated.
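One sketch, assuming the field names are exactly ID, status, and Stamp: pivot the two rows per ID into columns with stats, then compare.

```
| stats max(eval(if(status="esb", Stamp, null()))) as esb_stamp
        max(eval(if(status="fuz", Stamp, null()))) as fuz_stamp
        by ID
| where esb_stamp - fuz_stamp >= 100
```

Note that with the sample data shown, no ID clears the 100 threshold (the largest gap is josh at 83), so an empty result there would be expected.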
I have a search yielding data from three different email fields; call them msg.header.to{}, msg.header.cc{}, and orig_recipient. I am looking to see if the email address contained within orig_recipient matches either of the other two. The issue is that Splunk captures the data differently in the msg.header columns. For example, the msg.header output is "Smith, Joe <joe.smith@email.com>", while the output in orig_recipient would only be "joe.smith@email.com". So, when I ask Splunk to tell me if the orig_recipient email address is in msg.header.to{}, I get a negative. I have tried like, if, where, and others, along with wildcards, but maybe my syntax is wrong. I am looking to see how I can search within a field using the value of another field as the search parameter. If that is not possible, extracting the data between the <> into another field and comparing against that field might work. Thank you for your time and attention to this matter.
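One sketch, with the caveat that mvfind treats its second argument as a regex, so the dots in the address will match any character (usually close enough for this check; escape them if you need strictness):

```
| eval in_to = if(isnotnull(mvfind('msg.header.to{}', orig_recipient)), 1, 0)
| eval in_cc = if(isnotnull(mvfind('msg.header.cc{}', orig_recipient)), 1, 0)
| eval recipient_match = if(in_to + in_cc > 0, "match", "no match")
```

mvfind works per value of a multivalue field, so this handles headers with several recipients without needing to strip the "Name <address>" wrapper first.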
I have a panel on my dashboard that is a list of transactions. I edited the drill-down to link to the search of the transaction when I click on one of the transactions on the panel. However, the search that it links to does not show the transaction successfully because the time range is not set correctly. The search gets the beginning time of the transaction correct, but it sets the end time as only 1 second after the beginning time. How do I change this automatic 1 second interval in the search to a 2 minute interval? This is what I have right now. Using the _time category in the transaction (attached), I've tried to extract the beginning time using the following code     <drilldown> <eval token="drilldown.earliest">strptime($row._time$,"%Y-%m-%dT%H:%M:%S.%3N%:z")</eval> <eval token="drilldown.latest">strptime($row._time$,"%Y-%m-%dT%H:%M:%S.%3N%:z") + 2m</eval> <link target="_blank">search?<...>;earliest=$drilldown.earliest$;latest=$drilldown.latest$</link> </drilldown>     But this gives the error " Invalid earliest_time.". I suspect I messed something up in the strptime command, does anyone have a fix?
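The likely culprit is the "+ 2m": eval arithmetic has no relative-time literals, so that expression fails and the token ends up invalid. The earliest/latest URL parameters accept epoch seconds, so adding 120 should work; a sketch, with the query portion of the link elided exactly as in the question:

```xml
<drilldown>
  <eval token="drilldown.earliest">strptime($row._time$, "%Y-%m-%dT%H:%M:%S.%3N%:z")</eval>
  <!-- 2 minutes = 120 seconds; eval does not understand "2m" -->
  <eval token="drilldown.latest">strptime($row._time$, "%Y-%m-%dT%H:%M:%S.%3N%:z") + 120</eval>
  <link target="_blank">search?&lt;...&gt;&amp;earliest=$drilldown.earliest$&amp;latest=$drilldown.latest$</link>
</drilldown>
```

If the error persists, verify in a standalone search that the strptime format string actually parses a sample $row._time$ value; %:z support can vary.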
I am trying to figure out the best way to utilize a regkey we set on Windows servers which indicates the environment the server is for, and then use that to limit the returned search events to that type of environment. We have the regkey set to "test" on our test servers. I would like something like an eventtype or tag, or some other simple mechanism, that reads that regkey on all of our Windows servers so that users could search: host=web* eventtype=test and see all of the events for the test servers in the timeframe. And if a user instead searched: host=web* eventtype=preprod it would show all of the events for the preprod servers. This does not HAVE to be an eventtype; I'm just using that as an example of a field that reflects the registry key value without users having to build an extra search for it every time. The reading of the regkey should also be such that if a server's regkey changes from test to preprod, that is picked up and, going forward, the server shows as preprod. We used to use an eventtype that was basically equal to host="*PP*", but our servers are now getting more random names, so that no longer works. Thanks.
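A sketch of one way to wire this up, with the stanza and field names below as assumptions rather than exact Windows TA settings: monitor the registry value, maintain a host-to-environment lookup with a scheduled search, and bind it as an automatic lookup so the environment behaves like an ordinary search-time field.

```
# Scheduled search (sketch): refresh the host -> environment mapping.
# sourcetype / key_path / registry_value_data are assumed names from a
# registry-monitoring input; adjust to whatever your TA actually emits.
index=wineventlog sourcetype=WinRegistry key_path="*\\Environment"
| stats latest(registry_value_data) as environment by host
| outputlookup host_environment.csv

# transforms.conf
[host_environment]
filename = host_environment.csv

# props.conf -- attach the lookup to every event so users can simply search
# host=web* environment=preprod
[default]
LOOKUP-environment = host_environment host OUTPUT environment
```

Because the scheduled search always takes the latest value, a server whose key flips from test to preprod switches environments on the next refresh, with no per-user effort.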
Hi, I have several log files that I add to Splunk. When I search for this string: index="Myindex" | search "HQL query plan in cache (select i from table1 i where" Splunk returns 198 events, but when I grep on the shell it returns 242! FYI 1: the time scope is set to "all time". FYI 2: the indexing process has completed. Thanks,
I want to use the setfields command to set fieldA to a particular value. That value is located in fieldB. How can I make setfields take the value of the field rather than the field name? setfields fieldA=fieldB sets fieldA to the string "fieldB". Thanks.
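setfields only assigns literal values, so the usual answer is eval, which does evaluate its right-hand side as an expression and so copies the field's value rather than its name:

```
| eval fieldA = fieldB
```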
How do you wrap text in the "Show Source" page, after clicking on Events Actions > Show Source?
Hi all, I have developed an app that has a custom dashboard. On that custom dashboard, I am using Splunk's JavaScript Web Framework to run my custom searches that call our external REST API and then... See more...
Hi all, I have developed an app that has a custom dashboard. On that custom dashboard, I am using Splunk's JavaScript Web Framework to run my custom searches that call our external REST API and then the dashboard is rendered using results returned from those searches. Specifically, I'm using the Search Manager to define and process results from my searches. The code structure that I'm following for each search is as follows:   var phishInc = new SearchManager({ id: "phishing_inc", preview: true, cache: true, search: "| snxusers stat=phishing_breakdown globalFilterValue=$globalFilterValue$" }, {tokens: true}); phishInc.on('search:failed', function(properties) { }); phishInc.on('search:progress', function(properties) { }); phishInc.on('search:done', function(properties) { }); var phishing_inc_search = splunkjs.mvc.Components.get('phishing_inc'); var phishing_inc_results = phishing_inc_search.data("results", {count: 0, output_mode: 'json_rows'}); phishing_inc_results.on("data", function () { // The data from the search is processed here });   $globalFilterValue$ is a token that I have defined whose value I set from a drop-down menu. Whenever I set its value, my searches are triggered automatically as I have set tokens: true  Now I have observed that for a single search only, the results are returned pretty quickly but when I define all of my searches  (total = 15) their times add up and the complete dashboard is rendered slowly. Since all of those searches depend on the globalFilterValue token, they are probably running in a sequential manner due to which the last parts of the dashboard are rendered at the end. Is there any way to speed up the execution of all these searches by somehow running them in a parallel fashion? Does Splunk JavaScript Web Framework allow any such possibility?
How do I find suspicious IP addresses that have accessed a host? I have a list of host IPs, but I need a Splunk search that will list all the IP addresses that have accessed my hosts. Thank you,
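A sketch, assuming your data exposes source and destination addresses as src_ip and dest_ip (substitute whatever fields and index your sourcetype actually has):

```
index=your_index dest_ip IN ("10.1.1.10", "10.1.1.11", "10.1.1.12")
| stats count earliest(_time) as first_seen latest(_time) as last_seen by src_ip dest_ip
| convert ctime(first_seen) ctime(last_seen)
| sort - count
```

A lookup of known-good sources excluded via `NOT [| inputlookup ...]` can then narrow this to only unexpected addresses.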
Hi, I have an external lookup I've written, and to future-proof it I've written it for Python 3. I have put python.version = python3 in the local transforms.conf, but if I edit the transform via the GUI that setting disappears, and without it the lookup reverts to Python 2 as the default and fails. I guess this _could_ go into default, but is there a cleaner way to do this? (In testing I can put the Python version in system.conf, but I won't be able to do that in prod.) So much for future-proofing :( Running 8.0.5.
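For reference, the setting must be written python.version with no space, inside the lookup's own stanza. A sketch of the transforms.conf fragment (stanza, script, and field names illustrative); shipping it in the app's default/ layer is the usual way to keep GUI edits, which only rewrite local/, from dropping it:

```
# myapp/default/transforms.conf (sketch)
[my_external_lookup]
external_cmd = my_lookup.py clientip domain
fields_list = clientip, domain
python.version = python3
```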
Hi, I'm trying to filter for values greater than zero. I have this search:

index="prod_super_cc" source=ETL_GRO_01ReadMessagesKafka
| spath input=data.Orders
| search "{}.LineRusherTransaction"="*"
| stats values({}.LineRusherTransaction) as LRTransactions

It brings back some results, including zero values and values greater than zero:

LRTransactions
0
48580100196
48580100231
48580100687
48580100744
48580100909
48580100910
48580101088
48580101119
48580101320

But I want to remove the zero values. I've tried:
| search "{}.LineRusherTransaction">"0"
| search "{}.LineRusherTransaction">0
and also
| where LRTransactions>0 (no results)

I've also tried: index="prod_super_cc" source=ETL_GRO_01ReadMessagesKafka | spath input=data.Orders | search "{}.LineRusherTransaction"="*" | table {}.LineRusherTransaction | where "{}.LineRusherTransaction" > 0 which says: Error in 'where' command: Type checking failed. The '>' operator received different types. None gave the expected result. I just want to filter out the zero values. Could you please help? Thank you
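Two separate issues combine here: in where, double quotes denote string literals (so "{}.LineRusherTransaction" > 0 compares a string with a number, hence the type error; field names need single quotes), and values() collects string values, so a numeric comparison needs tonumber. One sketch:

```
index="prod_super_cc" source=ETL_GRO_01ReadMessagesKafka
| spath input=data.Orders
| search "{}.LineRusherTransaction"="*"
| rename "{}.LineRusherTransaction" as lrt
| where tonumber(lrt) > 0
| stats values(lrt) as LRTransactions
```

Filtering before the stats keeps the zeros out of the collected list; alternatively, mvfilter on LRTransactions after the stats would also work.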
I'm trying to build a search that will be used for a notable event, looking for the creation of a load balancer listener on port 80, which is very straightforward:

eventName=CreateListener requestParameters.port=80

However, I only want the notable event to trigger if the result from the search above was applied to an internet-facing load balancer, which means I'd have to search backwards (with the timestamp of the search above as the start time) for the first result I find of:

eventName=CreateLoadBalancer requestParameters.scheme=internet-facing

I also need to ensure that the load balancer where the listener was created is the same as the one found (if anything) from the CreateLoadBalancer event. In other words, requestParameters.loadBalancerArn (from the CreateListener event) needs to equal responseElements.loadBalancers{}.loadBalancerArn (from the CreateLoadBalancer event). I'm not necessarily looking for someone to write this for me (though that would be helpful as well), but I'd appreciate a pointer in the right direction; I haven't had much luck searching the forums and documentation for exactly what I'm trying to attempt here. Thank you.
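One possible shape, as a starting point rather than a finished rule: pull both event types in one search, normalize the ARN onto a single field, and keep only ARNs that have both events. Refining the "search backwards from the listener's timestamp" semantics (e.g. comparing earliest/latest per ARN) is left open.

```
(eventName=CreateListener "requestParameters.port"=80)
OR (eventName=CreateLoadBalancer "requestParameters.scheme"=internet-facing)
| eval lb_arn = coalesce('requestParameters.loadBalancerArn',
                         'responseElements.loadBalancers{}.loadBalancerArn')
| stats values(eventName) as events by lb_arn
| where mvcount(events) = 2
```

The single quotes around the field names are needed in eval because of the dots and braces in the JSON-derived names.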
I want to create multiple timecharts in a single search. The scenario I am stuck on is something like this: index="A" sourcetype="B" | where Activity_type = "Activity1" | timechart span=10m count by Event_Type There are multiple Activity_type values, and I want multiple timecharts by Event_Type, one for each Activity_type, in a single search. Thanks in advance for your help.
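If one chart with a combined series per (Activity_type, Event_Type) pair is acceptable, a sketch:

```
index="A" sourcetype="B"
| eval series = Activity_type . ":" . Event_Type
| timechart span=10m count by series
```

If you need visually separate charts instead, the trellis layout option on the visualization may be worth exploring as a way to split one result set into panels.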
Hello, I have a question about throttling in correlation searches. I understand how throttling works, but I need something more. What I mean: I have a correlation search with some response actions (create a notable event and create a SNOW ticket). Throttling is configured: the window duration is 1 hour and the fields to group by are "user" and "dest". So, for example, if the user is user1 and the destination is dest1 and the response actions are triggered, no more response actions for that combination of user1 and dest1 will be triggered for the next hour. Fine. My question is: imagine that response actions for (user1, dest1) were triggered. Is there any way to make Splunk suppress response actions for (user1, dest1) for, say, the next week, while keeping the 1-hour window for all other (user, dest) combinations? The use case: imagine a SOC analyst, investigating the alert for (user1, dest1), finds the root cause, but it cannot be fixed immediately, so the alert for (user1, dest1) will keep firing for the next few days. That is annoying, so the analyst would like an option to suppress alerts just for (user1, dest1) for a few days. Thank you very much for any hint. Best regards, Lukas
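One common pattern, sketched with hypothetical lookup and field names: keep an analyst-maintained suppression lookup and filter the correlation search against it before the response actions fire, leaving the built-in 1-hour throttle untouched for everything else.

```
... existing correlation search ...
| lookup manual_suppressions user dest OUTPUT suppress_until
| where isnull(suppress_until) OR _time > suppress_until
```

The analyst adds a row (user1, dest1, epoch one week out) to manual_suppressions; expired rows simply stop matching and the pair falls back to normal throttling.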
Hi, while using the chart command I get null values for status in search, and in the dashboard I do not see them in the panel. I am trying to get a distinct count of run_Id for each of the values (col1, col2, col3, ...). This is what I am seeing on the search head:

Name     col1  col2  col3  col4
abc123   21    40
xyz789   35    50

In the dashboard, the panel shows the table below, missing col3 and col4:

ID       col1  col2
abc123   21    40
xyz789   35    50

Search query:

index=xyz sourcetype=abc event_name=test earliest=@d
| fields - _raw
| eval TIME=strftime(strptime(timestamp,"%Y.%m.%d"),"%F")
| fields app_name event_name TIME values Id
| search name=* values="col1" OR values="col2" OR values="col3" OR values="col4"
| chart dc(run_Id) OVER name by values
| fields "APP NAME" col1 col2 col3 col4

I also want to add one new column, something like count(Id) as ID_Count by time. I tried usenull, useother, and fillnull; none worked.
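chart simply drops a series that has no matching events, so the panel loses col3/col4 whenever they are empty for the time range. Forcing the columns to exist after the chart is one workaround (a sketch; fillnull with an explicit field list creates missing fields at the fill value):

```
| chart dc(run_Id) over name by values
| fillnull value=0 col1 col2 col3 col4
| table name col1 col2 col3 col4
```

For the additional ID_Count column, one option is computing it in a separate search and attaching it with appendcols, since a chart with a by-split doesn't readily carry a second, differently grouped aggregation.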