All Posts


Note that wildcards inside eval/case() are not expanded (they only work in search terms), so like() is used for the message matches, and parentheses make the AND/OR precedence explicit:

| eval Status=case((priority="ERROR" AND tracePoint="EXCEPTION") OR like(message, "%Error while processing%"), "ERROR", priority="WARN", "WARN", (priority!="ERROR" AND tracePoint!="EXCEPTION") OR NOT like(message, "%(ERROR):%"), "SUCCESS")
| stats values(Status) as Status by transactionId
| eval Status=mvindex(Status, 0)
Take a look at the asset and identity framework documentation https://docs.splunk.com/Documentation/ES/7.3.1/Admin/Addassetandidentitydata Priorities can be assigned through the searches you write to pull in A&I data or can be derived from network subnets. Typically you may write searches to pull in data from sources and assign priorities based on criteria, such as whether the asset is a production asset, or the identity is a senior manager or a system administrator. This can be based on their job title or group membership.  
Hello, Splunkers! I am learning Splunk ES and trying to understand how the urgency value is assigned to notables generated by correlation searches. I went over this article: How urgency is assigned to notable events in Splunk Enterprise Security - Splunk Documentation. So, if severity is assigned in the settings of the correlation search, where do we assign priority to assets? Can someone please explain, or point me to a documentation page describing exactly how this process (assigning priority) works? Specifically, I would really appreciate knowing where this should be configured: on Enterprise Security itself or elsewhere, and whether it is done through the GUI or requires manually editing some config files.

Also, perhaps a silly question, but can we also assign priority to identities, for example to give admin accounts a higher priority than regular accounts?

Thank you for taking the time to read and reply to my post.
Maybe you can clarify the use case more? For example, how do the data and the model enter Splunk? Assuming the data are in one set of ingested events (and that your model is about time series), are the predictions also in ingested events? Or are they in some sort of data table? Or is the model a prescribed mathematical formula from which Splunk is expected to calculate predictions? R² is nothing but mathematics, and Splunk is not bad at math. But no, Splunk doesn't have a built-in function or command for this. Another possible route is the Splunk Machine Learning Toolkit. Even though your problem is perhaps not machine learning, the mathematics are similar enough.
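If the actual and predicted values end up as fields on the same results, a minimal sketch of computing R² with plain eval/stats might look like this (the field names actual and predicted are assumptions, and the makeresults rows are just toy data standing in for your events):

```spl
| makeresults count=5
| streamstats count AS n
| eval actual=2*n, predicted=2*n - 1
| eventstats avg(actual) AS mean_actual
| eval ss_res=pow(actual-predicted, 2), ss_tot=pow(actual-mean_actual, 2)
| stats sum(ss_res) AS ss_res, sum(ss_tot) AS ss_tot
| eval R2=1 - ss_res/ss_tot
```

This is just the textbook definition, R² = 1 - SS_res/SS_tot, spelled out in SPL; replace the toy eval with your own fields.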
I obtained an AppDynamics account to install the on-premises AppDynamics platform on a trial basis. However, when I search for Downloads in “AppDynamics and Observability Platform”, the Platform is not displayed. I forget what steps I took to register for the account, but maybe it’s because I created it using a SaaS trial license. Is it possible to install the on-premises AppDynamics platform from this state? Or is there no other way but to recreate the account?
Hi @matheusvortex, you could write the results of the two searches into one summary index (called e.g. notables), adding to each alert all the fields you need, and then run the third alert on the summary index, displaying the fields you need. This is essentially the approach Enterprise Security takes. Ciao. Giuseppe
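A minimal sketch of this approach, where the index name notables and the field alert_name are assumptions: each of the two alert searches ends by tagging and collecting its results with something like | eval alert_name="alert_1" | collect index=notables, and the third alert then runs only against the summary index:

```spl
index=notables (alert_name="alert_1" OR alert_name="alert_2")
| stats values(alert_name) AS alerts BY transactionId
| where mvcount(alerts)=2
```

The where clause here keeps only transactions seen by both alerts; adjust it to whatever correlation the third alert actually needs.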
Hi @vineela, do your logs always contain the backslashes? If yes, you should account for them in the regex. In regex101.com (https://regex101.com/r/7Fq96D/1):

errorCode\s*\=\s*\\\"(?<errorCode>[^\\]+)

but in Splunk you must try:

| rex "errorCode\s*\=\s*\\\\\"(?<errorCode>[^\\]+)"

Ciao. Giuseppe
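To check the escaping layers without touching real data, one possible makeresults harness (a sketch; the field name log matches the question, and the rex line is the one suggested above):

```spl
| makeresults
| eval log="severity = \\\"ERROR\\\", errorCode = \\\"PAY_STAT_ERR_0017\\\","
| rex field=log "errorCode\s*\=\s*\\\\\"(?<errorCode>[^\\]+)"
| table errorCode
```

The eval string reproduces the backslash-quote sequence from the raw event; if the escaping is right, errorCode should come out as PAY_STAT_ERR_0017.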
Because something is wrong. That's the short and useless answer for a badly asked question. For something more constructive - click on that red exclamation mark and see which checks are failing.
You can't. Even with output_format=hec you can specify some metadata fields, like source or sourcetype (which can affect your license usage), but the destination index has to be provided explicitly in the collect command invocation.
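In other words, the index is always named as a literal argument of collect itself; a sketch (the index name my_summary and the source value are assumptions):

```spl
index=_internal sourcetype=splunkd log_level=ERROR
| collect index=my_summary output_format=hec source=error_rollup
```

The index= argument is fixed at parse time, which is why it can't be driven by a field value in the events.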
I found the solution. For those who encounter the same problem: you need to restart the IIS application to reload the CLR, not just the service.
Hi @ejwade, I'm with @bowesmana on this - I don't think it's possible to run | collect with multiple index locations. You could do this instead:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| appendpipe [| search index="myindex1" | collect index=myindex1]
| appendpipe [| search index="myindex2" | collect index=myindex2]

You will need an appendpipe command for each index you want to export to, but you should know the destination indexes in advance anyway.
I have a log and am able to fetch all the error codes that share the same format, but I am not able to fetch the logs for one error code:

{"stream":"stderr","logtag":"P","log":"10/May/2024:09:31:53 +1000 [dgbttrfr] [correlationId=] [subject=], ERROR au.com.jbjcbdj.o.fefewgr.logging.LoggingUtil - severity = \"ERROR\", DateTimestamp = \"09/May/2024 23:31:53\", errorCode = \"PAY_STAT_ERR_0017\", errorMessage = \"Not able to fetch error\","hostname":"ip-101-156-185.ap-southeast-2.internal","host_ip":"10.56","cluster":"nod/pmn08"}

I tried fetching it with:

|rex field=log "errorCode\s=\s*(?<errorCode>[^,\s]+)"

But I am not able to fetch the value for this code, whereas I can fetch all the other codes. Can anyone help? Thanks in advance.
Isn't this a duplicate question answered a week back ? https://community.splunk.com/t5/Splunk-Cloud-Platform/Single-card-value-background-color-change/m-p/685989#M3011
Hi @PATAN, With Dashboard Studio, you can either dynamically color the text OR the background - as far as I know, you can't do both. You could achieve this effect a couple of ways though - create two visualisation panels, one for Dropped and one for NotDropped, and make them show/hide depending on the value of the token.

Another option (if you are using Absolute mode) is to put a square behind the single value box which colors itself based on the token, while the single value changes the text color based on the token (with a transparent background). Here's some example code:

{
  "visualizations": {
    "viz_UVeH0JP5": {
      "type": "splunk.singlevalue",
      "dataSources": { "primary": "ds_VyZ1EWbM" },
      "options": {
        "majorColor": "> majorValue | matchValue(majorColorEditorConfig)",
        "backgroundColor": "transparent"
      },
      "context": {
        "majorColorEditorConfig": [
          { "match": "NotDropped", "value": "#2f8811" }
        ]
      }
    },
    "viz_eKO2ikid": {
      "type": "splunk.rectangle",
      "options": {
        "fillColor": "> fillDataValue | rangeValue(fillColorEditorConfig)",
        "rx": 10,
        "strokeColor": "> strokeDataValue | matchValue(strokeColorEditorConfig)"
      },
      "context": {
        "fillColorEditorConfig": [
          { "value": "#171d21", "to": 100 },
          { "value": "#088F44", "from": 100 }
        ],
        "fillDataValue": "> primary | seriesByType(\"number\") | lastPoint()",
        "strokeDataValue": "> primary | seriesByType(\"number\") | lastPoint()",
        "strokeColorEditorConfig": [
          { "match": "Dropped", "value": "#D41F1F" },
          { "match": "NotDropped", "value": "#d97a0d" }
        ]
      },
      "dataSources": { "primary": "ds_dSLmtNBD" }
    }
  },
  "dataSources": {
    "ds_VyZ1EWbM": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval value=\"$status$\"\n| table value"
      },
      "name": "dummy_search"
    },
    "ds_dSLmtNBD": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval value=if(\"$status$\"=\"Dropped\",100,0)\n| table value"
      },
      "name": "background"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    },
    "input_I2IoVEpX": {
      "options": {
        "items": [
          { "label": "Dropped", "value": "Dropped" },
          { "label": "Not Dropped", "value": "NotDropped" }
        ],
        "token": "status",
        "selectFirstSearchResult": true
      },
      "title": "Dropdown Input Title",
      "type": "input.dropdown"
    }
  },
  "layout": {
    "type": "absolute",
    "options": { "width": 1440, "height": 960, "display": "auto" },
    "structure": [
      { "item": "viz_eKO2ikid", "type": "block", "position": { "x": 610, "y": 180, "w": 250, "h": 130 } },
      { "item": "input_I2IoVEpX", "type": "input", "position": { "x": 630, "y": 70, "w": 198, "h": 82 } },
      { "item": "viz_UVeH0JP5", "type": "block", "position": { "x": 610, "y": 180, "w": 250, "h": 130 } }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "description": "",
  "title": "colors"
}
Hi, I am new to AppD. I want to use Method Invocation Data Collectors to collect data for a specific method, System.Net.Sockets.Socket:DoConnect, and show it in my business transaction snapshots.

Here is the configuration: [screenshot]

Here is the result: [screenshot]

I got nothing in the data collector tab. Why? Did I set something wrong? Thanks!
I don't believe it is possible to do - you can in theory do this:

index=_audit
| head 1
| eval message="hello"
| table user action message
| collect testmode=f [ | makeresults | fields - _time | eval index="main" | format "" "" "" "" "" ""]

but the subsearch would need to know which index to select, and it runs before the outer search, so you can't do what you are trying to do.
@Miguel3393 actually if you change that line to  | eval Error_{Codigo_error}=if(in(Codigo_error, "69", "10001", "11"), 1, null()) i.e. replace the final 0 with null() then you will not get all the extra columns for other Codigo_error values.
OK, if you want to add more error-code use cases, then change this line

| eval Error_{Codigo_error}=if(Codigo_error="69" OR Codigo_error="10001", 1, 0)

to

| eval Error_{Codigo_error}=if(in(Codigo_error, "69", "10001", "11"), 1, 0)

and add as many as needed.
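As a quick self-contained check of what the {Codigo_error} expansion does (the three toy values here are assumptions):

```spl
| makeresults count=3
| streamstats count AS n
| eval Codigo_error=case(n=1, "69", n=2, "10001", n=3, "5")
| eval Error_{Codigo_error}=if(in(Codigo_error, "69", "10001", "11"), 1, 0)
| table Codigo_error Error_*
```

Each distinct Codigo_error value creates its own Error_<code> column; the null() variant suggested in this thread suppresses the columns for non-matching codes instead of filling them with 0.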
Thank you very much for the detailed reply; that gives me enough to action now. I appreciate the contributions to the community in this way.