Hello everyone, please check the data below:

ERROR 2024-08-09 14:19:22,707 email-slack-notification-impl-flow.BLOCKING @3372f96f] [processor: email-slack-notification-impl-flow/processors/2/route/0/processors/0; event: 5-03aca501-42b3-11ef-ad89-0a2944cc61cb] error.notification.details: { "correlationId" : "5-03aca501-42b3-11ef-ad89-0a2944cc61cb", "message" : "Error Details", "tracePoint" : "FLOW", "priority" : "ERROR", }

ERROR 2024-08-09 14:19:31,389 email-slack-notification-impl-flow.BLOCKING @22feab4f] [processor: email-slack-notification-impl-flow/processors/2/route/0/processors/0; event: 38de9c30-49eb-11ef-8a9e-02cfc6727565] error.notification.details: { "correlationId" : "38de9c30-49eb-11ef-8a9e-02cfc6727565", "message" : "Error Details", "priority" : "ERROR", }

The two blocks of data above are coming in as one event, but I want them to be two events, each starting at the keyword "ERROR". Below is my props.conf entry for this, but it is not working:

[applog_test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE = date
SHOULD_LINEMERGE = true
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ERROR\s+

Please help me figure out how to fix this. Thanks in advance!
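For comparison, here is a minimal props.conf sketch for this kind of break-before-keyword case, under the assumption that the sourcetype is applog_test and every event begins with "ERROR" followed by a timestamp. Note that SHOULD_LINEMERGE = true combined with LINE_BREAKER = ([\r\n]+) is a common cause of merged events; disabling line merging and breaking with a lookahead is one alternative:

```
[applog_test]
SHOULD_LINEMERGE = false
# Break before each "ERROR <date>"; the lookahead keeps ERROR inside the event,
# because only the first capture group (the newlines) is discarded
LINE_BREAKER = ([\r\n]+)(?=ERROR\s+\d{4}-\d{2}-\d{2})
TIME_PREFIX = ERROR\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Line-breaking settings take effect on the first instance that parses the data (indexer or heavy forwarder) and only apply to newly indexed events, so already-indexed data will stay merged.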
Hi Splunkers, I am monitoring my websites using Splunk Website Monitoring, and I have configured an alert that sends me an email whenever my website goes down or takes too long to respond. Now I also want to receive an alert email whenever my website comes back UP or functions normally again, to notify me that the website is working fine. Could you please share your knowledge here and help me set up this alert? TIA.
I am trying to disable the Splunk Secure Gateway app in a clustered environment. However, I don't see an option to disable the app under Apps -> Manage Apps; it only displays the current status of the app, which is "Active". I also tried the same on a single-node installation, where there is an option to disable the app right next to its current status in the same menu, i.e. Apps -> Manage Apps. So, how can I disable Splunk Secure Gateway in the clustered environment?
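For what it's worth, a sketch of one possible approach, assuming a search head cluster where apps are pushed from the deployer (the staging path and the exact app folder name, splunk_secure_gateway, should be verified against your environment):

```
# On the deployer, stage a local override:
# $SPLUNK_HOME/etc/shcluster/apps/splunk_secure_gateway/local/app.conf
[install]
state = disabled
```

followed by `splunk apply shcluster-bundle` to push the change to the cluster members.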
Hi guys, I want to see the predictive monitoring feature of the ITSI product. How can I get a free tour of it? Kindly help me, please.
We have enabled an On Demand Capture Session for capturing memory leaks on one of our nodes. After the session ends, we are unable to see the detection dashboard.
Hi All, I need to consolidate/correlate data from two different indexes, as explained below. I have gone through multiple posts from experts on this forum relevant to this, but somehow the same query isn't working for my use case. My situation:

In index=windows, the field "host" contains all the different hosts sending logs to Splunk, for example Host01, Host02, etc. In another index, index=cmdb, the field "dv_name" contains the same hostnames. There are also other fields in this index, such as dv_status and dv_os, which I need to be part of the final output. So, as explained above, the common link is the host field; its name differs across the two indexes, but the values are the same.

When I run the following query to get my expected output, it only pulls data from the windows index. It completely ignores the cmdb index, irrespective of the fact that the cmdb index has events from the same hosts in whatever time range I select.

(index=windows) OR (index=cmdb sourcetype="snow:cmdb_ci_server" dv_name=*)
| eval asset_name=coalesce(dv_name, host)
| stats dc(index) as idx_count, values(index), values(dv_os), values(dv_install_status) by asset_name

Output it is showing:

asset_name | idx_count | index   | dv_os | dv_status
Host01     | 1         | windows |       |
Host02     | 1         | windows |       |

Expected output:

asset_name | idx_count | index         | dv_os          | dv_install_status
Host01     | 2         | windows, cmdb | Windows Server | Production
Host02     | 2         | windows, cmdb | Windows Server | Test
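A hedged sketch of the query with explicit renames, plus lower() normalization of the join key, in case a case or FQDN mismatch between host and dv_name is splitting the assets into separate rows (everything here uses only field names from the post):

```
(index=windows) OR (index=cmdb sourcetype="snow:cmdb_ci_server" dv_name=*)
| eval asset_name=lower(coalesce(dv_name, host))
| stats dc(index) AS idx_count,
        values(index) AS index,
        values(dv_os) AS dv_os,
        values(dv_install_status) AS dv_install_status
    BY asset_name
```

If the cmdb rows still vanish, it is worth confirming with a bare `index=cmdb sourcetype="snow:cmdb_ci_server"` search over the same time range that dv_name is actually extracted at search time; if it is not, coalesce() never sees it and those events get no asset_name.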
Hi, can we create widgets that display the drive utilization per volume, like My Computer? I have to create a dashboard like the one above for separate partitions. Let me know if it is possible. Thanks
Hi, I am new to Splunk and just got a free Cloud trial. I did the following:
1- Logged in to the Cloud trial instance
2- Created an index named winpc
3- Went to Apps > Universal Forwarder and downloaded it on a Windows PC
4- Installed the forwarder on the Windows PC; during setup I selected "use with cloud instance"
5- Left the receiver index blank, as I had no idea about my Splunk instance's FQDN/IP
6- Checked services; the Splunk Universal Forwarder service is running, logging on as Local System

Issues:
1- I can see no logs in the winpc index, even after waiting an hour or so
2- How can I tell the forwarder to forward Windows and Sysmon logs too? Should I edit the inputs.conf file?

Kindly guide and help me so that I can get logs and learn further. Regards
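On the second issue, a minimal inputs.conf sketch for the forwarder, assuming the index is winpc and Sysmon is installed with its default event channel (place it in %SPLUNK_HOME%\etc\system\local\ or in an app on the forwarder, then restart the service):

```
[WinEventLog://Security]
index = winpc
disabled = 0

[WinEventLog://System]
index = winpc
disabled = 0

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
index = winpc
disabled = 0
renderXml = true
```

On the first issue, a Splunk Cloud trial normally expects the Universal Forwarder credentials package downloaded from the cloud instance to be installed on the forwarder; that package supplies the outputs.conf pointing at your stack, so no logs arriving often means it is missing.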
Hi, I previously had a Splunk Dev license which I use for testing. As my license expired, I requested a new one. It's been more than 3 weeks, yet my request is still pending. Any help is appreciated. Thanks
In the search query, I am trying to view a CSV dataset that shows clusters on a map. I manage to get a visualisation with different-sized bubbles based on the values: bigger bubbles for bigger values. However, once I add it to an existing dashboard, the bubbles disappear. When I set "Data Configurations" -> "Layer Type" to "Marker", the dashboard then shows the clusters, but they are markers of the same size instead of bubbles sized by value.

Here is the source code of my visualisation:

{
    "type": "splunk.map",
    "options": {
        "center": [1.339638489909646, 103.82878183020011],
        "zoom": 11,
        "baseLayerTileServer": "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png",
        "baseLayerTileServerType": "raster",
        "layers": [
            {
                "type": "marker",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "seriesColors": [
                    "#7b56db", "#cb2196", "#008c80", "#9d6300", "#f6540b",
                    "#ff969e", "#99b100", "#f4b649", "#ae8cff", "#8cbcff",
                    "#813193", "#0051b5", "#009ceb", "#00cdaf", "#00490a",
                    "#dd9900", "#465d00", "#ff677b", "#ff6ace", "#00689d"
                ]
            }
        ]
    },
    "dataSources": {
        "primary": "ds_TmJ6iHdE"
    },
    "title": "Dengue Clusters",
    "context": {},
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
We have the deployment below:

UF ----> HF ----> IDX

UFs send data to the HF, and the HF acts as an intermediary forwarder between the UFs and the IDXs. Now we want to enable TLS between Splunk components. Can we enable TLS between the HF and the IDXs and leave the UFs alone? Will UF data also be TLS-compliant? If not, will the UFs still send data to the IDXs, or will we stop receiving logs altogether?
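TLS in Splunk forwarding is negotiated per hop, so securing only the HF -> IDX leg leaves the UF -> HF leg in cleartext; the UFs keep sending as before, since their own outputs are unchanged. A sketch of the HF -> IDX side, with hypothetical hostnames and certificate paths:

```
# server.conf on both HF and IDX (sketch; CA path is an example)
[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem

# On the HF: outputs.conf
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslVerifyServerCert = true

# On each IDX: inputs.conf
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
```

Note the indexers' plain [splunktcp] port must be replaced (or kept in parallel during migration) by the [splunktcp-ssl] port, and the HF must be switched over at the same time, or forwarding stops.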
Hello, I am experiencing a periodic issue with SmartStore where a bucket will try to be evicted, fail, and repeat that cycle thousands of times. The indexer IO is fine, the bucket is warm, we have enough cache sizing, and I have not been able to correlate any cache logs with when these failures begin on multiple indexer nodes in the cluster (~33% of indexers). Two questions:
* What is an urgent mode eviction?
* What can cause warm buckets to be unable to be evicted when they rolled to warm about a full day earlier?
Peace be upon you. I am now rolling out correlation searches, and I do not have data to fully test them. I want to activate them in order to protect the company from any attack. I have the MITRE ATT&CK compliance security content, but I do not know where to start or how to organize my approach. I would appreciate any advice.
We are using Splunk Cloud in our enterprise, and as part of an automation project we want a programmatic way of running Splunk searches. From the Splunk website we found that there is a node module, splunk-sdk (https://www.npmjs.com/package/splunk-sdk), with which we can access Splunk, even though the module does not explicitly mention anything about Splunk Cloud. Following is the code we attempted, but it is failing to connect. Would like to know if any special configuration needs to be done in order to achieve the connection.

(async () => {
    let splunkjs = require('splunk-sdk');
    let service = new splunkjs.Service({username: "myusername", password: "***"});
    try {
        await service.login();
        console.log("Login was successful");
        let jobs = await service.jobs().fetch();
        let jobList = jobs.list();
        for (let i = 0; i < jobList.length; i++) {
            console.log("Job " + i + ": " + jobList[i].sid);
        }
    } catch (err) {
        console.log(err);
    }
})();

Following is the error we are getting. Please help in understanding and resolving this issue if anyone has encountered the same.
Hi team, new user here. I was going through https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Admin/ConfigureIPAllowList. I have the sc_admin role, I have also enabled token authentication, and my Splunk Cloud version is greater than 8.2.2201. I wanted to add certain IP addresses to the allow list; however, I don't see the option to add an IP address (screenshot attached).
I have reviewed the curl command syntax in the details section of the Add-on download page, but I was not able to discern how to pass the following to the "| curl" command:
1) How can I pass the equivalent of -k or --insecure?
2) How do I pass two headers on the same command line?
From the Linux prompt, my command looks like this:
curl -X POST -H "Content-Type: application/json" -H "UUID: e42eed31-65bb-4283-ad05-33f18da75513" -k "https://abc.com/X1" -d "{ lots of data }"
I am trying to extract fields for this custom data but unable to parse the data | extract kv pairdelim="  " kvdelim=" _: " Log _: Alert _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: QA _: EDW Policy _: 1.2 _: 600123568 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicatble Tax Rate _: data _: testing Log _: Alert _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: QA _: EDW Policy _: 1.2 _: 600123568 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicatble Tax Rate _: data _: testing Log _: Inform _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: QA _: EDW Policy _: 1.2 _: 600123568 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicatble Tax Rate _: data _: testing Log _: Inform _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: QA _: EDW Policy _: 1.2 _: 600123569 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicatble Tax Rate _: data _: testing Log _: Inform _: 2024-07-28T15:00:00-06:01 _: 123456789 _: duckcreek.medpro.com _: UAT _: EDW Policy _: 1.2 _: 600123570 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate _: data _: testing Log _: Alert _: 2024-07-28T15:00:00-06:01 _: 123456789 _: duckcreek.medpro.com _: NFT2 _: EDW Policy _: 1.2 _: 600123571 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate Info _: data _: testing Log _: Inform _: 2024-07-28T15:00:00-06:01 _: 12345 _: duckcreek.medpro.com _: UAT _: EDW Policy _: 1.2 _: 600123570 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate _: data _: testing Log _: 
Alert _: 2024-07-28T15:00:00-06:01 _: 12345 _: duckcreek.medpro.com _: NFT2 _: EDW Policy _: 1.2 _: 600123571 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate Info _: data _: testing Log _: Inform _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: UAT _: EDW Policy _: 1.2 _: 600123570 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate _: data _: testing Log _: Alert _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: NFT2 _: EDW Policy _: 1.2 _: 600123571 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate Info _: data _: testing Log _: Inform _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: QA _: EDW Policy _: 1.2 _: 600123570 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate _: data _: testing Log _: Alert _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: QA _: EDW Policy _: 1.2 _: 600123571 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicable Tax Rate Info _: data _: testing Log _: Inform _: 2024-07-28T15:00:00-06:01 _: 1234 _: duckcreek.medpro.com _: QA _: EDW Policy _: 1.2 _: 600123568 _: Quote Intake from Outlook _: intake_quote _: CreateCalculation.cpp _: CalculateTaxRate _: true _: Identify Applicatble Tax Rate _: data _: testing Any help would be appreciated . can someone please help me with parsing from search or command line using props
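Because the values in this data are positional and the " _: " delimiter carries no key names, | extract (kv) has nothing to name the fields with. Splitting on the delimiter and assigning positions is one workaround. A hedged sketch, assuming each event starts at "Log _:" and using made-up names for the positions (severity, event_time, and so on are assumptions, not fields in the data):

```
... | eval parts=split(_raw, " _: ")
    | eval severity=mvindex(parts, 1),
           event_time=mvindex(parts, 2),
           app_id=mvindex(parts, 3),
           server=mvindex(parts, 4),
           environment=mvindex(parts, 5),
           product=mvindex(parts, 6)
```

Getting each "Log _:" block into its own event first (for example, with a LINE_BREAKER in props.conf on the parsing tier, breaking before "Log _:") makes the positional split reliable; otherwise several records share one _raw and the indexes shift.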
Hi All, HTTP Event Collector logs are arriving in Splunk, but the host, source, and sourcetype are not showing in Splunk. Please find the screenshot below. Please help me.
Hi All, I am trying to create a scatter chart or similar in Dashboard Studio to show debit transaction amounts over time. A query like this works well in Search, but translates poorly to the dashboard:

source="Transaction File.csv" "Debit Amount"="*" | stats values("Debit Amount") BY "Posted Transactions Date"

I am aware I likely need to convert the date from string format to date format within my search, something to the effect of:

| eval Date = strptime('Posted Transactions Date', "%d/%m/%y")

But I am struggling to get the final result. I have also played around with using the _time field instead of the Posted Transactions Date field, and with timecharts, without success, which I think is likely also a formatting issue. E.g.:

source="Transaction File.csv" | timechart values("Debit Amount")

As there are multiple debit amount values per day in some cases, I would ideally like a second, similar dashboard that sums these debits per day instead of showing them as individual values, while also removing one outlier debit amount value of 7000. I am struggling a bit with the required search(es) to get my desired dashboard results. Any help would be appreciated, thank you!
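A hedged sketch of the daily-sum search, assuming the date really is day/month/two-digit-year. Two quoting details matter here: in eval, a field name with spaces needs single quotes ('Posted Transactions Date'), whereas double quotes there would make strptime parse the literal string; renaming the spaced fields up front sidesteps the issue for the rest of the pipeline:

```
source="Transaction File.csv" "Debit Amount"=*
| rename "Debit Amount" AS debit_amount, "Posted Transactions Date" AS posted_date
| eval _time=strptime(posted_date, "%d/%m/%y")
| where debit_amount != 7000
| timechart span=1d sum(debit_amount) AS daily_debits
```

Once _time is populated from the parsed date, the same search should behave identically in Dashboard Studio, and dropping the `where` clause and swapping `sum` back to `values` gives the per-transaction variant.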
Dear All, I need your assistance in fetching Microsoft Exchange Server logs using the Splunk Universal Forwarder. I can provide the paths for the MSG Tracking, SMTP, and OWA log files. The goal is to configure the Universal Forwarder to collect these logs and forward them to a central Splunk server. Given that the Splunk documentation indicates that the MS Exchange App is end-of-life (EOL), is it necessary to use an add-on? The documentation suggests creating GPO policies and making other changes. However, in IBM QRadar, the process is simpler: you install the WinCollect agent, specify the paths for MSG Tracking, SMTP, and OWA logs, and the agent collects and forwards the logs to the QRadar Console. The Auto Discovery feature in QRadar then creates the log source automatically. Is there a simpler and more straightforward method to collect these logs using the Splunk Universal Forwarder? Thank you in advance for your assistance.
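Plain monitor stanzas on the Universal Forwarder are usually enough to collect flat log files like these, even without the EOL Exchange app (the app mainly adds sourcetypes, parsing, and dashboards on top). A sketch with example paths and hypothetical sourcetype/index names; substitute your real MSG Tracking, SMTP, and OWA paths:

```
# inputs.conf on the Exchange server's Universal Forwarder (paths are examples)
[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
sourcetype = msexchange:messagetracking
index = exchange
disabled = 0

# SMTP protocol logs (example path)
[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\ProtocolLog]
sourcetype = msexchange:smtp
index = exchange
disabled = 0

# OWA access is logged by IIS (example site folder)
[monitor://C:\inetpub\logs\LogFiles\W3SVC1]
sourcetype = ms:iis
index = exchange
disabled = 0
```

With an outputs.conf pointing at the central Splunk server, this is roughly equivalent to the QRadar WinCollect flow you describe: the forwarder tails the files and ships them, and the only extra work compared with QRadar's auto-discovery is choosing sourcetypes and creating the target index yourself.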