Hi, I would like to display the count of each error code.
I think the approach should be adjusted. When a user selects 2023, you can always derive any value from it, e.g., "2022, 2023". Theoretically, you can even use a secondary token setter to calculate it if the input is free text rather than a selector. Then, your search can simply be:

index=cls_prod_app appname=Lacerte applicationversion IN ($applicationversion$) message="featureperfmetrics" NOT(isinternal="*") taxmodule=$taxmodule$ $hostingprovider$ datapath=* operation=createclient $concurrentusers$ latest=-365d@d
| eval totaltimeinsec = totaltime/1000
| bin span=1m _time
| timechart p95(totaltimeinsec) as RecordedTime by applicationversion limit=0

Here is an example in Simple XML for the input:

<input type="dropdown" token="applicationversion">
  <label>Version</label>
  <choice value="2023,2024">2024</choice>
  <choice value="2022,2023">2023</choice>
  <prefix> </prefix>
  <suffix> </suffix>
</input>
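For the free-text case mentioned above, a secondary token setter might look like this — a sketch only: the input and token names are assumptions, and it presumes the user types a four-digit year.

```xml
<input type="text" token="yearinput">
  <label>Version (free text)</label>
  <change>
    <!-- Derive "previous,selected" from the typed year, e.g. 2023 -> "2022,2023" -->
    <eval token="applicationversion">tostring(tonumber($value$)-1).",".$value$</eval>
  </change>
</input>
```

The search would then reference $applicationversion$ exactly as with the dropdown version.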
Hi @jaibalaraman, You can use multiple Sankey visualizations to display a single source-target-value combination, or you can create mock visualizations using boxes, text, and a single-value visualization. In this Splunk 9.3 example, I've used three adjacent boxes, with the center box having 50% transparency. A markdown element is placed over the center box to provide the text, and a single-value element is placed to the right to provide a count. In your case, however, 403120 appears to be an event identifier and not a count. What are you trying to communicate with individual tiles that can't be represented by a Sankey diagram?
It should be set up such that:
1. A search in Splunk Enterprise has fields you find interesting.
2. This search is used in the "Splunk App for SOAR Export" to send data to SOAR.
3. Each result in your Splunk search creates an artifact in SOAR, and the artifacts are grouped into a SOAR container based on the field configured as the grouping field in the "Splunk App for SOAR Export".
4. The artifacts will have CEF fields containing the data of the fields of your Splunk search.

Then you can run the playbooks in SOAR on your containers with the artifacts, and the playbooks can run actions using the CEF fields in your artifacts as inputs. Can you confirm that you can view the artifact in SOAR and that it has CEF fields containing your data?
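To illustrate step 4, here is a minimal Python sketch of what the resulting artifacts might look like and how a playbook could gather one CEF field across a container. The artifact structure and field names are illustrative assumptions, not the exact SOAR schema.

```python
# Hypothetical artifacts, one per Splunk search result, with CEF fields
# holding the values of the Splunk fields (names are illustrative only).
artifacts = [
    {"name": "row 1", "cef": {"sourceAddress": "10.0.0.5", "destinationAddress": "10.0.0.9"}},
    {"name": "row 2", "cef": {"sourceAddress": "10.0.0.7"}},
]

def collect_cef(artifacts, field):
    """Gather the values of one CEF field across all artifacts in a container."""
    return [a["cef"][field] for a in artifacts if field in a.get("cef", {})]

print(collect_cef(artifacts, "sourceAddress"))  # -> ['10.0.0.5', '10.0.0.7']
```

A real playbook would fetch these via the SOAR API rather than a literal list, but the shape of the data is the point here.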
Here is a .conf presentation about using TLS with Splunk: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf
In the search query, I am trying to view a CSV dataset that shows clusters on a map. I managed to get a visualisation with different-sized bubbles based on the values: bigger bubbles for bigger values. However, once I add it to an existing dashboard, the bubbles disappear. When I set "Data Configurations" -> "Layer Type" to "Marker", the dashboard shows the clusters, but they are markers of the same size instead of bubbles sized by value.

Here is the source code of my visualisation:

{
    "type": "splunk.map",
    "options": {
        "center": [1.339638489909646, 103.82878183020011],
        "zoom": 11,
        "baseLayerTileServer": "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png",
        "baseLayerTileServerType": "raster",
        "layers": [
            {
                "type": "marker",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "seriesColors": ["#7b56db", "#cb2196", "#008c80", "#9d6300", "#f6540b", "#ff969e", "#99b100", "#f4b649", "#ae8cff", "#8cbcff", "#813193", "#0051b5", "#009ceb", "#00cdaf", "#00490a", "#dd9900", "#465d00", "#ff677b", "#ff6ace", "#00689d"]
            }
        ]
    },
    "dataSources": {
        "primary": "ds_TmJ6iHdE"
    },
    "title": "Dengue Clusters",
    "context": {},
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
@ab73863 - Try this approach with sessionKey. Specify the Splunk Cloud URL; the port number should be 8089, but you can confirm with your Splunk Cloud representative or Splunk Cloud Support. I hope this helps!
Thanks for your answer, let me try it and check whether it works. Also, why are you doing SSL in inputs.conf? Per the docs, it should be done in outputs.conf on the HF.
@Nawab - The answer is yes, you can set up SSL in both places, and you can also set SSL only from HF to indexer. It should not matter what you use from UF to HF and from HF to indexer; the two legs act independently of each other.

Connection with SSL:

inputs.conf
---------------
[splunktcp-ssl:9997]
serverCert = <string>
sslPassword = <string>
requireClientCert = <boolean>
sslVersions = <string>
cipherSuite = <cipher suite string>
ecdhCurves = <comma separated list of ec curves>
dhFile = <string>
allowSslRenegotiation = <boolean>
sslQuietShutdown = <boolean>
sslCommonNameToCheck = <commonName1>, <commonName2>, ...
sslAltNameToCheck = <alternateName1>, <alternateName2>, ...
useSSLCompression = <boolean>

outputs.conf
------------------
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <indexer>:9997
sslPassword = password
clientCert = $SPLUNK_HOME/etc/auth/server.pem
(Check outputs.conf.spec for other SSL configs)

Connection without SSL:

inputs.conf
---------------
[splunktcp:9997]

outputs.conf
------------------
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = <indexer>:9997

I hope this helps!
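For completeness, the UF -> HF leg can be secured the same way, with the UF's outputs.conf pointing at an ssl-enabled splunktcp-ssl input on the HF. A hypothetical sketch (the group name and cert path are assumptions):

```
# UF outputs.conf -- SSL on the UF -> HF leg
[tcpout]
defaultGroup = my_heavy_forwarders

[tcpout:my_heavy_forwarders]
server = <hf>:9997
sslPassword = <password>
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslVerifyServerCert = <boolean>
```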
Hi @tuts, are you speaking of Enterprise Security? Anyway, if you install the Splunk Security Essentials App (https://splunkbase.splunk.com/app/3435), you have all the available Correlation Searches, and for each one there's a test data set that you can use. Ciao. Giuseppe
We have the deployment below: UF ----> HF ----> IDX. UFs are sending data to the HF, and the HF is acting as an intermediary forwarder between the UFs and the IDX. Now we want to enable TLS between Splunk components. Can we do TLS between the HF and the IDX and leave the UFs as they are? Will the UF data also be TLS-compliant? If not, will the UFs still send data to the IDXs, or will we stop receiving logs altogether?
Hello, I am experiencing a periodic issue with SmartStore where a bucket will try to be evicted, then fails, and repeats that cycle thousands of times. The indexer IO is fine, the bucket is warm, we have enough cache sizing, and I have not been able to correlate any cache logs with when these failures begin on multiple indexer nodes in the cluster (~33% of indexers). Two questions:
* What is an urgent mode eviction?
* What can cause warm buckets to be unable to be evicted when they rolled to warm roughly a full day earlier?
Thanks @yuanliu! I will try this promptly tomorrow and let you know the results - will accept as solution if it works :)
I'm not normally one to resurrect dead posts, but as I was trying to accomplish the same task myself and found this post via Google, I figured I'd give an update. Per the documentation for the TA (https://docs.splunk.com/Documentation/AddOns/released/CiscoASA/Releasehistory), they removed eventgen support in version 3.2.5.
Tom, everything seems to be working fine. Your help was crucial in finding the problem. Thank you very much.
If all else fails, it's always useful to check the job log and see the lispy search. It might not solve the problem, but it can give valuable insight.
Peace be upon you. I am now running correlation searches and I do not have data to fully test them. I want to activate them in order to protect the company from any attack. I have the MITRE ATT&CK Compliance Security Content, but I do not know where to start or how to organize myself. I hope for advice.
Well, Splunk can be a bit inconsistent sometimes about using quotes. But when you're referencing something as an argument to a function (or an rvalue in an assignment), double quotes mean that Splunk will use the literal string. So

| eval new_value="Posted Transaction Date"

would yield the literal string, not the field contents. (The same goes for strptime arguments.) But yes, in other places it can be a bit unobvious which form to use at any given point.
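To make the contrast concrete, a quick sketch using the field name from the example above:

```
| eval a="Posted Transaction Date"
| eval b='Posted Transaction Date'
```

Here a gets the literal string, while b gets the value of the field named Posted Transaction Date: single quotes are how eval dereferences field names containing spaces or special characters.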
Hi Joe, there is command documentation in default/searchbnf.conf:

[curl-command]
syntax = CURL [choice:URI=<uri> OR URIFIELD=<urifield>] [optional: METHOD=<GET|PATCH|POST|PUT|DELETE> VERIFYSSL=<TRUE|FALSE> DATAFIELD=<field_name> DATA=<data> HEADERFIELD=<json_header_field_name> HEADERS=<json_header> USER=<user> PASS=<password> DEBUG=<true|false> SPLUNKAUTH=<true|false> SPLUNKPASSWDNAME=<username_in_passwordsconf> SPLUNKPASSWDCONTEXT=<appcontext (optional)> TIMEOUT=<float>]

-k = "VERIFYSSL=FALSE"
headers="{\"content-type\":\"application/json\"}"

best regards, Andreas
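Based on that syntax, a hypothetical invocation might look like the following; the URI and header values are placeholders, not from the app's docs:

```
| makeresults
| curl URI="https://example.com/api/items" METHOD=GET VERIFYSSL=FALSE HEADERS="{\"content-type\":\"application/json\"}"
```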
Hi @chimuru84, sorry, I misunderstood your requirement! Let me understand: you want to know the users connected through third-party authentication in the last hour that didn't make another connection in the last year but did before; is that correct? First: how far back do you want to run your check, two years? Then, when you say "authentication at the moment", do you mean the last hour or something else? With the above hypothesis, please try this:

index=...... earliest=-2y latest=-h [ search index=...... earliest=-h latest=now | dedup id | fields id ]
| eval Period=if(_time>now()-31536000, "Last Year", "Previous Year")
| stats dc(Period) AS Period_count values(Period) AS Period BY id
| where Period_count=1 AND Period="Previous Year"
| table id

In this way, you have the users connected in the last hour whose last connection (excluding the last hour) was more than one year ago. If you need a different condition, you can adapt my approach. Ciao. Giuseppe
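The selection logic in a search like that can be sketched outside SPL as a set operation: users seen in the last hour whose prior activity falls only before the one-year mark. The event data and timestamps below are made up for illustration.

```python
YEAR = 365 * 24 * 3600
HOUR = 3600
now = 2_000_000_000  # fixed "now" so the example is deterministic

# (user_id, epoch_time) login events -- illustrative data only
events = [
    ("alice", now - 30 * 60),         # alice: last hour...
    ("alice", now - 2 * YEAR),        # ...and two years ago
    ("bob",   now - 30 * 60),         # bob: last hour...
    ("bob",   now - 30 * 24 * 3600),  # ...and one month ago
    ("carol", now - 2 * YEAR),        # carol: old activity only
]

recent    = {u for u, t in events if t >= now - HOUR}
last_year = {u for u, t in events if now - YEAR <= t < now - HOUR}
older     = {u for u, t in events if t < now - YEAR}

# Connected in the last hour, previously active, but dormant for over a year
dormant_returners = (recent & older) - last_year
print(sorted(dormant_returners))  # -> ['alice']
```

bob is excluded because he was active within the last year, and carol because she is not in the last-hour set.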