All Posts

Hello, thanks for your reply. We have already tested putting this in the props.conf of our search head TA, but it also did not extract the event fields any further. Regarding the Splunkbase TA, I am not sure about that; maybe I can give it a try.
Apologies, I have pasted the log below and just changed the words; hopefully this is easier to work with. The log file always starts at "Software Version..." and always ends with the following line at the bottom: "software Completed at 10/05/2024 09:00:06 local time"

Software Version 7.0.1890.0 on server.server.net
Entry 6828 starting at 10/05/2024 09:00:01
Starting via software on CustomerDomain
------------------------------------------------------------
Software Version 7.0.1890.0 on sql002
Entry 6828 starting at 10/05/2024 09:00:01
Submitted by software Autosubmit at 10/05/2024 08:00:04
Executing as company\account
Starting via software on CustomerDomain
Process ID XXXXX
------------------------------------------------------------
Activity: Preparing modules for first use.
Current Operation:
Status Description:

Name Used (GB) Free (GB) Provider Root CurrentLocation
---- --------- --------- -------- ---- ---------------
JD software company.company.net

2024-05-10T09:00:05.000Z | INFO | ba9992e7-1681-49b9-b984-711c34f89f4c | SQL002 | file | ICOMcheckfilearrival | Checking for arrival of new file
2024-05-10T09:00:06.000Z | INFO | ba9992e7-1681-49b9-b984-711c34f89f4c | SQL002 | file | ICOMcheckfilearrival | New File has been received.
2024-05-10T09:00:06.000Z | INFO | ba9992e7-1681-49b9-b984-711c34f89f4c | SQL002 | file | ICOMcheckfilearrival | Sync File has been received.
------------------------------------------------------------
Job Completed at: 10/05/2024 09:00:06
Elapsed Time: 00:00:04.2499362
Kernel mode CPU Time: 00:00:00.5468750
User mode CPU Time: 00:00:00.9531250
Read operation count: 2185
Write operation count: 73
Other operation count: 15510
Read byte count: 5156432
Write byte count: 1688
Other byte count: 205934
Total page faults: 36072
Total process count: 0
Peak process memory: 78073856
Peak job memory: 85004288
------------------------------------------------------------
------------------------------------------------------------
Final Status Code: 0, Severity: Success
Final Status: The operation completed successfully
------------------------------------------------------------
software Completed at 10/05/2024 09:00:06 local time
Hello, Thanks for your response. I have tried your suggestion on the search head but unfortunately it did not extract the "event" field further.  
Hi everyone, if I lower the index retention and tell it to use the archive, what happens to the logs with longer retention? Example: we currently have 1 year of retention. If we move to 6 months of retention + 18 months of archiving, what happens to logs older than 6 months?
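For context, the change being described usually comes down to two settings in indexes.conf; here is a minimal sketch, with the index name and archive path being hypothetical:

[your_index]
# ~6 months (180 days); buckets whose newest event is older than this are rolled to frozen
frozenTimePeriodInSecs = 15552000
# when set, frozen buckets are copied to this archive path instead of being deleted
coldToFrozenDir = /opt/splunk/archive/your_index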
| eval Status=case(
    (priority="ERROR" AND tracePoint="EXCEPTION") OR like(message, "%Error while processing%"), "ERROR",
    priority="WARN", "WARN",
    (priority!="ERROR" AND tracePoint!="EXCEPTION") OR NOT like(message, "%(ERROR):%"), "SUCCESS")
| stats values(Status) as Status by transactionId
| eval Status=mvindex(Status, 0)

Note that eval's = operator does a literal comparison and does not expand * wildcards, so the message tests use like() with % wildcards instead.
Take a look at the asset and identity framework documentation: https://docs.splunk.com/Documentation/ES/7.3.1/Admin/Addassetandidentitydata Priorities can be assigned through the searches you write to pull in A&I data, or they can be derived from network subnets. Typically you write searches to pull in data from your sources and assign priorities based on criteria such as whether the asset is a production asset, or whether the identity is a senior manager or a system administrator; this can be based on job title or group membership, as in the sketch below.
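For illustration, a minimal sketch of such a search, assuming a hypothetical inventory lookup asset_inventory.csv with nt_host and owner fields:

| inputlookup asset_inventory.csv
| eval priority=case(like(nt_host, "prod-%"), "critical", like(owner, "%admin%"), "high", true(), "medium")
| table ip, mac, nt_host, dns, owner, priority, category
| outputlookup es_assets.csv

The case() here tags production hosts as critical and admin-owned assets as high, and the result is written to a lookup that the A&I framework can be pointed at.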
Hello, Splunkers! I am learning Splunk ES and trying to understand how the urgency value is assigned to notables generated from correlation searches. I went over this article: How urgency is assigned to notable events in Splunk Enterprise Security - Splunk Documentation. So, if severity is assigned in the settings of the correlation search, where do we assign the priority to assets? Can someone please explain, or point to a documentation page describing exactly how assigning priority is done? Specifically, I would really appreciate it if someone could share where this should be configured: in Enterprise Security itself or elsewhere, and whether it is done through the GUI or requires manually editing some config files. Also, a bit of a stupid question, but can we also assign priority to identities, for example to give admin accounts higher priority than regular accounts? Thank you for taking the time to read and reply to my post.
Maybe you can clarify the use case more? For example, how do the data and the model enter Splunk? Assuming the data are in one set of ingested events (and that your model is about time series), are the predictions also in some ingested events? Or are the predictions in some sort of data table? Or is the model a prescribed mathematical formula from which Splunk is expected to calculate predictions? R2 is nothing but mathematics, and Splunk is not bad at math. But no, Splunk doesn't have a built-in function or command for this. Another possible route is the Splunk Machine Learning Toolkit. Even though your problem is perhaps not machine learning, the mathematics are similar enough.
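For what it's worth, if the actuals and predictions end up side by side in events or a table, R2 is a few lines of eval/stats; a minimal sketch, assuming fields named actual and predicted (both hypothetical):

| eventstats avg(actual) as mean_actual
| eval sq_err=(actual-predicted)*(actual-predicted), sq_tot=(actual-mean_actual)*(actual-mean_actual)
| stats sum(sq_err) as ss_res, sum(sq_tot) as ss_tot
| eval R2=1-(ss_res/ss_tot)

This is just the textbook 1 - SSres/SStot definition.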
I obtained an AppDynamics account to install the on-premises AppDynamics platform on a trial basis. However, when I search Downloads in "AppDynamics and Observability Platform", the platform is not displayed. I forget what steps I took to register the account, but maybe it's because I created it using a SaaS trial license. Is it possible to install the on-premises AppDynamics platform in this state, or is there no other way but to recreate the account?
Hi @matheusvortex, you could write the results of the two searches into one summary index (called, e.g., Notables), adding to each alert all the fields you need, and then run the third alert on the summary index, displaying the fields you need. This is the approach Enterprise Security itself takes. Ciao. Giuseppe
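A minimal sketch of the idea, with the index name, alert names, and carried fields all hypothetical; each of the two alerts would end with something like:

| eval alert_name="alert_one"
| table _time alert_name transactionId src dest
| collect index=notables

and the third alert then reads from the summary index:

index=notables alert_name IN ("alert_one", "alert_two")
| stats values(alert_name) as alerts by transactionId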
Hi @vineela, do you always have the backslashes in your logs? If yes, you should account for them in the regex. In regex101.com (https://regex101.com/r/7Fq96D/1):

errorCode\s*\=\s*\\\"(?<errorCode>[^\\]+)

but in Splunk you must try:

| rex "errorCode\s*\=\s*\\\\\"(?<errorCode>[^\\]+)"

Ciao. Giuseppe
Because something is wrong. That's the short and useless answer for a badly asked question. For something more constructive - click on that red exclamation mark and see which checks are failing.
You can't. Even with output_format=hec you can specify some metadata fields, like source or sourcetype (which can affect your license usage), but the destination index has to be provided explicitly in the collect command invocation.
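In other words, the metadata can ride along as arguments, but the index is fixed at invocation time; a sketch with all names hypothetical:

| collect index=my_summary output_format=hec source=my_source sourcetype=my_sourcetype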
I found the solution. For those who encounter the same problem: you need to restart the IIS application to reload the CLR, not only the service.
Hi @ejwade, I'm with @bowesmana on this - I don't think it's possible to run | collect with multiple index locations. You could do this instead:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| appendpipe [| search index="myindex1" | collect index=myindex1]
| appendpipe [| search index="myindex2" | collect index=myindex2]

You will need an appendpipe command for each index you want to export to, but you should know the destination indexes in advance anyway.
I have a log, and I am able to fetch all the codes that are in the same format, but I am not able to fetch logs for one error code:

{"stream":"stderr","logtag":"P","log":"10/May/2024:09:31:53 +1000 [dgbttrfr] [correlationId=] [subject=], ERROR au.com.jbjcbdj.o.fefewgr.logging.LoggingUtil - severity = \"ERROR\", DateTimestamp = \"09/May/2024 23:31:53\", errorCode = \"PAY_STAT_ERR_0017\", errorMessage = \"Not able to fetch error\","hostname":"ip-101-156-185.ap-southeast-2.internal","host_ip":"10.56","cluster":"nod/pmn08"}

I tried fetching it as a key-value pair using this:

| rex field=log "errorCode\s=\s*(?<errorCode>[^,\s]+)"

but I am not able to fetch the value, whereas I can fetch all the other codes except this one. Can anyone help? Thanks in advance.
Isn't this a duplicate of a question answered a week back? https://community.splunk.com/t5/Splunk-Cloud-Platform/Single-card-value-background-color-change/m-p/685989#M3011
Hi @PATAN, With Dashboard Studio, you can either dynamically color the text OR the background - as far as I know, you can't do both. You could achieve this effect a couple of ways though - create two visualisation panels, one for Dropped, and one for NotDropped, and make them show/hide depending on the value of the token. Another option (if you are using Absolute mode) is to put a square behind the single value box which colors itself based on the token, and the single value changes the text color based on the token (with a transparent background). Here's some example code:

{
  "visualizations": {
    "viz_UVeH0JP5": {
      "type": "splunk.singlevalue",
      "dataSources": { "primary": "ds_VyZ1EWbM" },
      "options": {
        "majorColor": "> majorValue | matchValue(majorColorEditorConfig)",
        "backgroundColor": "transparent"
      },
      "context": {
        "majorColorEditorConfig": [ { "match": "NotDropped", "value": "#2f8811" } ]
      }
    },
    "viz_eKO2ikid": {
      "type": "splunk.rectangle",
      "options": {
        "fillColor": "> fillDataValue | rangeValue(fillColorEditorConfig)",
        "rx": 10,
        "strokeColor": "> strokeDataValue | matchValue(strokeColorEditorConfig)"
      },
      "context": {
        "fillColorEditorConfig": [
          { "value": "#171d21", "to": 100 },
          { "value": "#088F44", "from": 100 }
        ],
        "fillDataValue": "> primary | seriesByType(\"number\") | lastPoint()",
        "strokeDataValue": "> primary | seriesByType(\"number\") | lastPoint()",
        "strokeColorEditorConfig": [
          { "match": "Dropped", "value": "#D41F1F" },
          { "match": "NotDropped", "value": "#d97a0d" }
        ]
      },
      "dataSources": { "primary": "ds_dSLmtNBD" }
    }
  },
  "dataSources": {
    "ds_VyZ1EWbM": {
      "type": "ds.search",
      "options": { "query": "| makeresults\n| eval value=\"$status$\"\n| table value" },
      "name": "dummy_search"
    },
    "ds_dSLmtNBD": {
      "type": "ds.search",
      "options": { "query": "| makeresults\n| eval value=if(\"$status$\"=\"Dropped\",100,0)\n| table value" },
      "name": "background"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    },
    "input_I2IoVEpX": {
      "options": {
        "items": [
          { "label": "Dropped", "value": "Dropped" },
          { "label": "Not Dropped", "value": "NotDropped" }
        ],
        "token": "status",
        "selectFirstSearchResult": true
      },
      "title": "Dropdown Input Title",
      "type": "input.dropdown"
    }
  },
  "layout": {
    "type": "absolute",
    "options": { "width": 1440, "height": 960, "display": "auto" },
    "structure": [
      { "item": "viz_eKO2ikid", "type": "block", "position": { "x": 610, "y": 180, "w": 250, "h": 130 } },
      { "item": "input_I2IoVEpX", "type": "input", "position": { "x": 630, "y": 70, "w": 198, "h": 82 } },
      { "item": "viz_UVeH0JP5", "type": "block", "position": { "x": 610, "y": 180, "w": 250, "h": 130 } }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "description": "",
  "title": "colors"
}
Hi, I am new to AppD. I want to use Method Invocation Data Collectors to collect data for a specific method, System.Net.Sockets.Socket:DoConnect, and show it in my business transaction snapshots. Here is the configuration: [configuration screenshot] Here is the result: [result screenshot] I got nothing in the data collector tab. Why? Did I set something wrong? Thanks!