All Posts

Hi All, I'm working hard to create a SIEM dashboard that has the AH list. Higher priority: 1) AB 2) CD 3) EF 4) GH; rest of the AH: 5) IJ 6) KL 7) MN. For each of these systems, I need a list of hosts associated with the AH and what is currently being ingested from the AH.
Hi @davilov, Here's a way I've found to hide/show a panel based on a dropdown. It depends on 3 steps:

1. Define a dropdown with options for each panel you'd like to show/hide. In this example I've called the token "show_panel", and we can choose to show/hide either of the two panels or show them all.
2. Set your panel visualisations to hide when there is no data, under the "Visibility" setting.
3. Update the searches for your visualisations to compare a known string (i.e. the possible token values) to the current token value:

``` I only want to show this panel if we have selected "Bar Chart" from the drop down: ```
| eval _show="Bar Chart"
| search _show="$show_panel$"
| fields - _show

You can get a bit fancier by creating chain searches to compare the text so that the search doesn't rerun every time you change the dropdown.

Here's a sample dashboard:

{
  "visualizations": {
    "viz_QNQd730H": {
      "type": "splunk.table",
      "title": "Table of data",
      "dataSources": { "primary": "ds_BGrBVi8Q" },
      "hideWhenNoData": true
    },
    "viz_JM2qhOeK": {
      "type": "splunk.bar",
      "title": "Bar Chart",
      "dataSources": { "primary": "ds_KD6bNQc9" },
      "options": {
        "xAxisTitleText": "Time",
        "xAxisLineVisibility": "show",
        "yAxisTitleText": "Score",
        "yAxisLineVisibility": "show",
        "yAxisMajorTickVisibility": "show",
        "yAxisMinorTickVisibility": "show"
      },
      "hideWhenNoData": true
    }
  },
  "dataSources": {
    "ds_BGrBVi8Q": {
      "type": "ds.search",
      "options": {
        "query": "| windbag\n| table source, sample, position\n| eval _show=\"Table\"\n| search _show=\"$show_panel$\"\n| fields - _show"
      },
      "name": "table_search"
    },
    "ds_KD6bNQc9": {
      "type": "ds.search",
      "options": {
        "query": "| gentimes start=-7\n| eval score=random()%500\n| eval _time = starttime\n| timechart avg(score) as score\n| eval _show=\"Bar Chart\"\n| search _show=\"$show_panel$\"\n| fields - _show"
      },
      "name": "barchart"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    },
    "input_hs0qamAf": {
      "options": {
        "items": [
          { "label": "All", "value": "*" },
          { "label": "Bar Chart", "value": "Bar Chart" },
          { "label": "Table", "value": "Table" }
        ],
        "defaultValue": "*",
        "token": "show_panel"
      },
      "title": "Choose your panel",
      "type": "input.dropdown"
    }
  },
  "layout": {
    "type": "grid",
    "options": { "width": 1440, "height": 960 },
    "structure": [
      { "item": "viz_QNQd730H", "type": "block", "position": { "x": 0, "y": 0, "w": 720, "h": 400 } },
      { "item": "viz_JM2qhOeK", "type": "block", "position": { "x": 720, "y": 0, "w": 720, "h": 400 } }
    ],
    "globalInputs": [ "input_global_trp", "input_hs0qamAf" ]
  },
  "description": "https://community.splunk.com/t5/Dashboards-Visualizations/Conditionally-show-hide-panels-based-on-dropdown-selection-in/m-p/686803#M56222",
  "title": "Splunk Answers Post"
}

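As a rough illustration of the chain-search variant (a sketch only; the data source names ds_base and ds_chain_bar are hypothetical, and the bar chart's primary data source would point at the chain):

"dataSources": {
  "ds_base": {
    "type": "ds.search",
    "options": {
      "query": "| gentimes start=-7\n| eval score=random()%500\n| eval _time = starttime\n| timechart avg(score) as score"
    }
  },
  "ds_chain_bar": {
    "type": "ds.chain",
    "options": {
      "extend": "ds_base",
      "query": "| eval _show=\"Bar Chart\"\n| search _show=\"$show_panel$\"\n| fields - _show"
    }
  }
}

When the token changes, only the cheap chain portion re-runs over the base results, so the expensive base search is not re-executed.
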
I'm having the same exact issue ... data looks fine in the event, looks fine even in a classic dashboard stats table, but in DS it's all messed up. Adding a space before the string doesn't fix it either. =\ This column just has data that's a combination of letters and numbers, for example: 00882b87. Sometimes the leading zeroes are dropped, other times they aren't; there doesn't seem to be a pattern. All I know is that it makes a day's worth of effort building this dashboard a complete waste of time.
This can tell you if the user's first login is the same as his last - hopefully this will give you some pointers

index=data earliest=-30d
| bin _time span=1d
| stats count by _time user
| eventstats min(_time) as first max(_time) as last by user
| where first = last

I would take issue with some of these statements as "best practice" for logging standards. We often find that developer-friendly formats such as JSON cause large ingestion volumes compared to the value of the data contained in the JSON. The ratio of field names to usable field values can typically be 50%, and developer logging frameworks will often just dump out JSON objects with empty field values, which is a real cost. I often see clients hitting their ingestion licence limits and then having to push back on developers who have written dashboards on their data, asking them to shrink it.

Anyway, as to your question: if you want to count how many of CategoryA are true and how many are false, and false is never written, you can only extrapolate the false count as the total count minus the true count, on the assumption that all other events are implicitly false. Therefore you need to know the data to be able to make those searches.

It's fine to have things like cat_a=true or categorya=1 - however, if you have 100 million events per day, use =1 rather than =true, so you save 300MB/day of ingestion. Also, mapping a "true" to something you can count is more expensive than simple wildcarding logic like

| stats sum(cat_*) as cat_*

if you have predictable naming conventions. Please also do not write full Java class names in the logs, e.g. org.apache.catalina.bla.bla.bla, as this has no value and just costs licence ingest. Most logging frameworks can abbreviate package names to a single character, and there is rarely ambiguity in class names.

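To illustrate the extrapolation above (a minimal sketch; the index name and the cat_a=1 convention are assumptions for the example, not from the original post):

index=data
``` cat_a=1 is written only when CategoryA is true; absent otherwise ```
| stats count as total sum(cat_a) as cat_a_true
| fillnull value=0 cat_a_true
| eval cat_a_false = total - cat_a_true
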
Here is my example search to start...

index=data
| timechart span=1d by user

Now, I am trying to build it out so that for the last 30 days I can get a count of new users that have not been seen on previous days. Tried some bin options and something like this, but no joy:

index=data
| stats min(_time) as firstTime by user
| eval isNew=if(strftime(firstTime, "%Y-%m-%d") == strftime(_time, "%Y-%m-%d"), 1, 0)
| where isNew=1

Any help?

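For what it's worth, a minimal sketch of one way to count new users per day (it assumes anyone first seen inside the 30-day window counts as new; users active before the window would need a longer lookback to exclude):

index=data earliest=-30d
| stats earliest(_time) as firstTime by user
| bin firstTime span=1d
| stats count as new_users by firstTime

The posted attempt can't work as written because _time no longer exists after the stats command, so the if() has nothing to compare against.
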
A lot to unpack here, but whenever you post SPL, please put it in a code block using the </> icon in the UI.

Firstly, you have a number of challenging commands: appendcols, dedup, sort. Your use of sort 1000000 implies you have a reasonable volume of data. If your first search returns 3 results and your appendcols returns 2 or 4 or anything other than 3, or returns them in ANY different order, then the columns will not align. Using sort early on is a bad choice: it will cause performance issues, and if you have more than 1000000 items they will be truncated, which can also cause problems with your appendcols.

Your first search could be made more efficient with

index=hum_stg_app "msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification" "msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF
| rename msg.OM_Body.header.transactionId as transactionId
| stats earliest(_time) as Time1 count by transactionId
| eval length=len(transactionId)
| where length=40
| eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S")

which I believe is doing what you are trying to do. The same principle applies to the second search.

Is the time range in the appendcols search the same as the outer search? Is transactionId from the first search supposed to be the same as transactionId_request? You can probably combine these into a single search, but if the two transaction IDs are the same, you would be safer using append rather than appendcols and then doing a final stats by the common transaction ID to join the two data sets together, as sketched below. Can you give more detail on how they are different? And when diagnosing issues like this, find a small data set where you can reproduce the problem.

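A minimal sketch of that append + final stats pattern (search filters trimmed for brevity; it assumes the two transaction IDs really do carry the same values):

index=hum_stg_app "msg.OM_MsgType"=REQUEST "msg.service_name"="fai-np-notification" msg.OM_Body.header.transactionId=*
| rename msg.OM_Body.header.transactionId as transactionId
| stats earliest(_time) as Time1 by transactionId
| append
    [ search index=hum_stg_app "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=*
    | rename msg.TransactionId as transactionId
    | stats earliest(_time) as Time by transactionId ]
| stats values(Time1) as Time1 values(Time) as Time by transactionId
| where len(transactionId)=40
| eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S")
| eval Transaction_Completed_time=strftime(Time,"%y-%m-%d %H:%M:%S")
| eval Time_diff=(Time-Time1)/3600

Because the final stats groups by the shared transactionId, row alignment no longer depends on the two searches returning the same number of results in the same order.
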
Add in

<change>
  <set token="selected_label">$label$</set>
</change>

Ok. Back up a little. You have a file. It's supposed to be a certificate (possibly with a certificate chain from a trusted root CA). How did you get it? Did you send someone a CSR to obtain a cert? Did you just get a cert because you mailed/called/faxed/whatever someone and told them "hey, we need a cert"? And the most important question here is - do you have a private key corresponding to that cert?
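As an aside, checking whether a private key matches a cert is quick with openssl - for an RSA pair, the two digests below must be identical (file names are placeholders):

# digest of the cert's public-key modulus
openssl x509 -noout -modulus -in mycert.pem | openssl md5
# digest of the private key's modulus - must match the one above
openssl rsa -noout -modulus -in mykey.pem | openssl md5
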
How do I fetch the fieldForLabel value using the token (option)? I have to pass the fieldForLabel to a query.

<input type="dropdown" token="option">
  <label>Choose from options</label>
  <fieldForLabel>TEST</fieldForLabel>
  <fieldForValue>aaa</fieldForValue>
  <search>
    <query>| inputlookup keyvalue_pair.csv | dedup TEST | sort TEST | table TEST aaa</query>
  </search>
</input>

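Combining this with the <change> handler suggested above, a minimal sketch (the token name selected_label comes from that suggestion):

<input type="dropdown" token="option">
  <label>Choose from options</label>
  <fieldForLabel>TEST</fieldForLabel>
  <fieldForValue>aaa</fieldForValue>
  <search>
    <query>| inputlookup keyvalue_pair.csv | dedup TEST | sort TEST | table TEST aaa</query>
  </search>
  <change>
    <!-- $label$ holds the label of the selected choice, i.e. the TEST value -->
    <set token="selected_label">$label$</set>
  </change>
</input>

$selected_label$ can then be referenced in any panel search alongside $option$.
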
Hi @bowesmana Thanks a lot!! You rock!! I did make an attempt using eventstats, but it didn't work because the if condition didn't work. It turns out I had to use the match function. I appreciate your help.
My post can be disregarded - simple misinformation and not checking what/where people were running their field extractions (app vs global permissions on field and transform extractions). Cheers nonetheless, and thanks for the pointers.
Already tried; it's not working. I reinstalled the whole of Splunk once again just to make sure I wasn't doing anything wrong, but nothing worked.
server.conf:

enableSplunkdSSL = true
sslRootCAPath = <path of root.pem file>
serverCert = <path of server.pem file>
sslPassword = <mypassword>
sslVersions = *,-ssl2

web.conf:

sslVersions = *,-ssl2
sslPassword = <mypassword>

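For reference, a sketch of where these settings normally live (the stanza names come from the stock .conf specs, not the original post):

# server.conf
[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /path/to/root.pem
serverCert = /path/to/server.pem
sslPassword = <mypassword>
sslVersions = *,-ssl2

# web.conf
[settings]
sslVersions = *,-ssl2
sslPassword = <mypassword>
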
Thanks for the information. I tried all the possible ways; SSL is not getting configured. I opened another case with Splunk, and they are looking into it. Even when we configure it from scratch as per the documentation, it's not working. Splunk asked for the diag file, and we have shared it with them, but no response on it yet.
index=_internal source=*splunkd.log* host=<all indexer hosts> bucketreplicator full earliest=-15m
| stats count dc(host) as num_indexer_blocked_by_peer by peer
| where num_indexer_blocked_by_peer > 0 AND count > 0
| join type=left peer
    [ search index=_introspection host=<all indexer hosts> hostwide earliest=-10m
    | stats values(data.instance_guid) as peer by host ]

Hi, A couple of notes regarding Network Explorer. The networkExplorer data collection was deprecated in the v0.88.0 Splunk helm chart. That said, the interface from the infrastructure navigator is still available if you ingest networkExplorer data (e.g., tcp.bytes). To ingest this data, you'll probably want to consider the upstream eBPF helm chart along with the OTel collector running as a gateway. This link may help: https://docs.splunk.com/observability/en/infrastructure/network-explorer/network-explorer-setup.html#migrate-from-networkexplorer-to-ebpf-helm-chart
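In case it helps, a rough sketch of the values involved (key names are my assumption from the splunk-otel-collector chart, so verify against that chart's values.yaml; the eBPF collectors themselves come from the separate upstream chart):

# values.yaml for the splunk-otel-collector chart
networkExplorer:
  enabled: false    # deprecated collection as of v0.88.0
gateway:
  enabled: true     # run the OTel collector as a gateway to receive the eBPF data
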
I was able to check _internal and found "SSLError(MaxRetryError("HTTPSConnectionPool(host='redacted.host.com', port=XXX): Max retries exceeded with url: /rest/token (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)')))")). I have verify SSL set to false in the tenable_consts.py file, so I am not sure if that has any bearing ... *update* it does not; still getting the same error. Any ideas?
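A handshake_failure alert like this typically means the server rejected the client's TLS parameters (protocol version, ciphers, or a required client cert) rather than a verification problem, which would explain why the verify SSL setting made no difference. One quick way to see what the server will negotiate (host and port are placeholders):

# prints the negotiated protocol and cipher, or the alert if the handshake fails
openssl s_client -connect your.tenable.host:443
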
The registration of the DLL worked for us.
index=hum_stg_app "msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification" "msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF
| rename msg.OM_Body.header.transactionId as transactionId
| eval lenth=len(transactionId)
| sort 1000000 _time
| dedup transactionId _time
| search lenth=40
| rename _time as Time1
| eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S")
| stats count by Time1 transactionId Request_time
| appendcols
    [| search index=hum_stg_app earliest=-30d fcr-np-sms-gateway "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=* "msg.NowSMSResponse"="{*Success\"}"
    | rename "msg.TransactionId" as transactionId_request
    | sort 1000000 _time
    | dedup transactionId_request _time
    | eval Time=case(like(_raw,"%fcr-np-sms-gateway%"),_time)
    | eval lenth=len(transactionId_request)
    | search lenth=40
    | dedup transactionId_request
    | stats count by transactionId_request Time ]
| eval Transaction_Completed_time=strftime(Time,"%y-%m-%d %H:%M:%S")
| eval Time_dif=Time-Time1
| eval Time_diff=(Time_dif)/3600
| fields transactionId transactionId_request Request_time Transaction_Completed_time count Time_diff Time Time1

Getting the wrong value in Transaction_Completed_time.