All Posts

Both my workplace and I are new to Splunk, and I'm developing some reports and dashboards for one of our applications. One dashboard I am working on includes a table of events showing when some reports are downloaded. The log file's sourcetype is _json_AuditFramework. I'm looking to include the parameter named @DocumentId and its corresponding value in a table. Right now, the table syntax lists parameters{}.value, and when there are multiple parameters{}.name and parameters{}.value entries in the log, they all show in the table. Depending on the report, I'm including trace information as well, and it has the same problem as the parameters. I haven't had luck with similar posts I found.

{"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentGetById","commandText":"ref.DocumentGetById","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":123123}],"serverIPAddress":"100.100.100.100","serverHost":"WEBSERVER","clientIPAddress":"101.101.101.101","sourceSystem":"WebSite","module":"Vendor.PRODUCT.BLL.DocumentManagement","accessDate":"2025-03-06T17:26:47.4112974-07:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.PRODUCT.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.PRODUCT.BLL.DocumentManagement.DocumentManager","method":"Get"}]}

host = WEBSERVER source = Logfile path sourcetype = _json_AuditFramework
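One common approach for this (a sketch, assuming the JSON is auto-extracted so that parameters{}.name and parameters{}.value arrive as parallel multivalue fields) is to locate the position of @DocumentId with mvfind and pull the value at the same position with mvindex:

... | eval idx=mvfind('parameters{}.name', "@DocumentId")
    | eval DocumentId=mvindex('parameters{}.value', idx)
    | table _time userName DocumentId

The same mvfind/mvindex pairing should work for the traceInformation{} fields, since they have the same parallel-array shape.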
You can create an init block at the top of the dashboard source to set your tokens on dashboard load:

<init>
  <set token="token1">*</set>
  <set token="token2">*</set>
</init>
Hello, I have a dashboard with a couple of pie charts and a summary table. The first pie chart sets token 1, the second sets token 2, and so on. I was wondering if there is any way I can refresh the summary table with whichever token values have been selected. In other words, if token 1 is set but token 2 is unset, then

BASE SEARCH | search token1=$token1$ token2=* (since token 2 is not set yet)

If token 1 and token 2 are both set, then

BASE SEARCH | search token1=$token1$ token2=$token2$ (as both tokens are set)

Eventually what I am trying to do is keep refreshing the dashboard based on user clicks, but at the moment, unless the user sets both token1 and token2, my dashboard panel shows nothing. It is stuck at "Waiting for user input".
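A minimal Simple XML sketch of the wildcard-default pattern (the search, panel, and field names here are illustrative, not from the original dashboard): initialise both tokens to *, let each chart's drilldown overwrite its own token, and the summary table's search always has a value to substitute, so it never waits for input.

<form>
  <init>
    <set token="token1">*</set>
    <set token="token2">*</set>
  </init>
  <search id="base_search">
    <query>index=main | fields field1 field2</query>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base_search">
          <query>stats count by field1</query>
        </search>
        <option name="charting.chart">pie</option>
        <drilldown>
          <set token="token1">$click.value$</set>
        </drilldown>
      </chart>
    </panel>
    <panel>
      <table>
        <title>Summary</title>
        <search base="base_search">
          <query>search field1=$token1$ field2=$token2$ | stats count by field1 field2</query>
        </search>
      </table>
    </panel>
  </row>
</form>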
Hi @rrossetti
Try without a named capture group and use $1 instead. The docs say:

Use $n (for example $1, $2, etc) to specify the output of each REGEX

See https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Transformsconf for more info.
Will
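As a concrete illustration, a sketch of a transform using an unnamed capture group (the stanza name, regex, and field name are made up for the example):

[extract_status_code]
REGEX = status=(\d+)
FORMAT = status_code::$1
WRITE_META = true

Here $1 refers to the first (and only) capture group in REGEX; WRITE_META = true applies if this is an index-time extraction.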
I need to create a playbook in Splunk SOAR to look up multiple IPs on abuseIP. Currently, abuseIP only allows one IP per lookup.
I am having difficulty converting event logs to metric data points: https://docs.splunk.com/Documentation/Splunk/9.4.0/Metrics/L2MOverview

According to the documentation, I think I need index-time extraction to modify the fields in the event. Raw event examples:

server_request_bytes{kafka_id="lkc-j2km8w",principal_id="u-j69zjw",type="Fetch",} 3.14 1736873280000
server_response_bytes{kafka_id="lkc-j2km8w",principal_id="u-j69zjw",type="ApiVersions",} 4.2 1736873280000

My goal is to parse so that the event has the fields necessary for log-to-metrics conversion. I think that means these are required (in addition to the timestamp):

metric_name: server_request_bytes
numeric_value: 3.14
measurement: server_request_bytes=3.14

I have 2 stanzas in transforms.conf which parse the metric name and the numeric value:

[metric_name]
REGEX = ^"(?P<metric_name>[a-z_-]+_[a-z_-]+\w+)
FORMAT = metric_name::$metric_name

[numeric_value]
REGEX = ^[^ \n]* (?P<metric_value>\d+\.\d+)
FORMAT = numeric_value::$metric_value

props.conf looks like this:

[my_log_to_metrics]
# extract metric fields
TRANSFORMS-metric_name = metric_name
TRANSFORMS-numeric_value = numeric_value
category = Log to Metrics
# parse timestamp
TIME_PREFIX = \}\s.*\s
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

Currently, when I try using this sourcetype, I see this error message in splunkd.log:

Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values.

(And no metric data in the metrics index.) I have a couple of questions:
1. Are the fields metric_name, numeric_value, and measurement required to be extracted at index time with transforms.conf for the log-to-metrics conversion?
2. How can I combine the extracted name and value fields to create the measurement field without writing another regex statement to parse the same thing?
3. How can I parse all of the fields between the curly braces (kafka_id, principal_id, type) as dimensions for the metric, in a generic way?
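One possible direction for questions 2 and 3 (a hedged sketch, not a verified configuration; stanza names are illustrative): a single transform can capture both name and value with plain groups and emit them in one FORMAT, and a second repeated-match transform can turn every key="value" pair inside the braces into a dimension. Two things worth checking against the docs: index-time transforms generally need WRITE_META = true, and the leading " in the posted metric_name REGEX does not appear in the sample events, so that regex would never match.

[metric_fields]
# $1 = metric name, $2 = numeric value, captured in one pass
REGEX = ^([A-Za-z_]+)\{[^}]*\}\s+(\d+(?:\.\d+)?)\s+\d+$
FORMAT = metric_name::$1 numeric_value::$2
WRITE_META = true

[metric_dimensions]
# applied repeatedly so every key="value" pair becomes a dimension
REGEX = (\w+)="([^"]*)"
FORMAT = $1::$2
WRITE_META = true
REPEAT_MATCH = true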
There are outputs configured on the box - splunk sends the event to a configured output along with its metadata (destination index among them). --> Here, outputs means indexers in my case: HF to indexer, and events should take the indexer's index as the destination index. But I have created an identical index on the HF as well, just with a different path.
@PickleRick "But the metadata in this case is just a text label attached to an event. It doesn't have to correspond to anything on the receiving end." ---> Can you please expand on this? Metadata includes the index as well, right?
Hi @Paaattt
Yes, the file is ultimately a text file, so you can use any regular text editor to edit and copy the contents into new files. Good point about the encrypted key. Does Kiteworks offer a field for the SSL password (which would be in your UF app)? If not, you will need to remove the encryption from the key before you add it to Kiteworks. Use something like this:

openssl rsa -in encrypted_key.pem -out decrypted_key.pem

When you run this command, OpenSSL will prompt you to enter the current password for the private key. After you provide the correct password, it will output the decrypted private key to the specified output file. Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
I've had a look through a bunch of the release notes for recent versions and there is no mention of a change in behaviour for this, nor is it listed as a bug fix, but again, that doesn't mean it has not been changed! For the customers I have set up SAML for, I've always made a point of telling them to manage access via the IdP. Also, don't forget that users won't automatically be deleted if they're removed from the IdP unless authentication extensions are configured. Anyway, back to the issue! Please let us know how you get on. Glad to hear people are using the Chargeback app; I used it a couple of years ago and, whilst it took a bit of setting up, it was great once in place! Good luck. Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
"I don't understand the phrase 'index will be used' in this context." OK. A somewhat simplified indexing and forwarding process:

As an event comes in, it arrives on an input. It might have a destination index already associated with it if it comes from a HF and is already parsed, or if it is explicitly destined for an index by a HEC input. Otherwise it will get assigned a destination index, either one explicitly configured for the input or a default one. That's what's happening on the input side. Now we have two things which can happen in parallel.

1. The event is supposed to be indexed locally (we're on an indexer, or indexAndForward=true) - the indexing part of Splunk searches _this instance's_ configuration for an existing index matching the one assigned to the event. If it finds one, it indexes the event into that index. If it doesn't, it either puts the event into the last-chance index, if one is configured, or drops the event completely (possibly emitting a warning into splunkd.log).

2. There are outputs configured on the box - splunk sends the event to a configured output along with its metadata (the destination index among them). But the metadata in this case is just a text label attached to an event. It doesn't have to correspond to anything on the receiving end.
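To make the two paths concrete, a hedged sketch of the relevant settings (the index and server names are illustrative):

# outputs.conf on a heavy forwarder that should also keep a local copy
[indexAndForward]
index = true

[tcpout:myindexers]
server = idx1.example.com:9997

# indexes.conf on the indexer - catch events whose destination index
# doesn't exist here instead of dropping them
lastChanceIndex = lastchance

[lastchance]
homePath   = $SPLUNK_DB/lastchance/db
coldPath   = $SPLUNK_DB/lastchance/colddb
thawedPath = $SPLUNK_DB/lastchance/thaweddb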
OK. So, assuming you have your TradeNumber and _time extracted, you can do something like this:

<your basic search>
| sort 0 _time
| eval timesent=if(searchmatch("Trade sent"),_time,null())
| streamstats current=f window=1 last(timesent) as lastsent by TradeNumber
| eval delay=_time-lastsent
| stats sum(delay) by TradeNumber

This creates an additional field and fills it with the event's time only when the event is the "sent" one. Then the streamstats copies the value of that field over to the next event of the same trade, so the "received" event contains both the "sent" time and the "received" time (the current event's time). Now all that's left is to calculate the difference between those two timestamps and sum up your delays.
@kiran_panchavat Thank you. I have a request in to our Splunk engineer. I am afraid they are going to tell me what @livehybrid said: that the roles must be mapped by the auth provider. I did look at your link and do not see anything related to my concern, as you stated.
@livehybrid  I checked the non-SAML users and I can edit them.  I am waiting on our Splunk Engineer to answer internally, but I think your answer is most plausible. Unfortunate, but plausible. 
Hi @jbeach
In the title you mention SAML auth. Are your users using SAML to log in to Splunk Cloud? If so, the role mappings for them should be managed by the authentication provider. It's possible that a change was made to prevent admins from manually changing these role mappings, as they are overwritten when a user logs in to Splunk Cloud. One thing you could check is whether you're able to modify the role of any non-SAML user, to rule out other issues. Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
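For reference, SAML group-to-role mapping is driven by the IdP and mirrored in authentication.conf; a sketch (the stanza suffix and group names are illustrative):

[roleMap_saml]
admin = Splunk_Admins
power = Splunk_PowerUsers
user  = Splunk_Users

Each line maps a Splunk role to the SAML group string the IdP sends, which is why role edits made in the UI can be overwritten at the user's next login.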
@jbeach Did you review this? I don't see any known issues related to your concern: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/ReleaseNotes/Issues
I kindly ask you to submit a Splunk Support ticket.
As of 9.4, the logging now includes the output group:

03-06-2025 11:38:16.306 +1300 WARN AutoLoadBalancedConnectionStrategy [199656 TcpOutEloop] - Current dest host connection=10.231.218.59:9997, connid=1, oneTimeClient=0, _events.size()=53605, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Mar 6 11:35:48 2025 for group=myindexers is using 31391960 bytes. Total tcpout queue size is 31457280. Warningcount=0
Splunk Cloud had an update this past Sunday, 3 Mar 2025. Since then, admins are unable to change a user's role. Is this a bug? We use the Chargeback App, and have it configured to use user roles to delineate charges per team.  
Hi Will, did you separate them just with a text editor, or did you do additional steps (e.g. a passphrase to decrypt the pem file, an SSL password if needed, etc.)?
Thanks,
Patrick
This seems to be working in the "Add Data" testing, but after adding this sourcetype, when I search I don't see the headers. Actually, the header starts on line 2, so I added a "File Preamble" to skip the first line and start ingesting the headers from line 2, but it is skipping the header too. What am I missing?
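If this is a structured (CSV-style) sourcetype, one thing worth trying (a sketch; the stanza name is illustrative, and it assumes indexed extractions) is pointing the parser directly at the header row instead of using a preamble pattern to skip line 1:

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
# tell the structured-data parser that the field names are on line 2
HEADER_FIELD_LINE_NUMBER = 2

PREAMBLE_REGEX drops every line it matches before header processing, so a pattern that also happens to match the header line would discard the header as well.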