The following query retrieves confroom_ipaddress values from the lookup table that do not match IP addresses found in the indexed logs:

| inputlookup lookup_ist_cs_checkin_rooms.csv where NOT [search index=fow_checkin message="display button:panel-*" | rex field=message "ipaddress: (?<ipaddress>[^ ]+)" | stats values(ipaddress) as confroom_ipaddress | table confroom_ipaddress]
| rename confroom_ipaddress as ipaddress1

I would like to add an additional condition to include IP addresses that match those found in the following logs:

index=fow_checkin "Ipaddress(from request header)" | rex field=message "IpAddress\(from request header\):\s*(?<ip_address>\S+)$" | stats values(ip_address) as ip_address2

This means we need to include IP addresses from lookup_ist_cs_checkin_rooms.csv that match the message "Ipaddress(from request header)" and exclude IP addresses from lookup_ist_cs_checkin_rooms.csv that match the message "display button:panel-*" as well. Please help.
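A hedged sketch of one way to combine the two conditions (assuming the extraction patterns quoted above are correct for the data): keep only lookup rows whose address appears in the request-header logs, then drop rows whose address appears in the display-button logs.

```spl
| inputlookup lookup_ist_cs_checkin_rooms.csv
| rename confroom_ipaddress as ipaddress
| search
    [search index=fow_checkin "Ipaddress(from request header)"
     | rex field=message "IpAddress\(from request header\):\s*(?<ipaddress>\S+)$"
     | stats count by ipaddress
     | fields ipaddress]
| search NOT
    [search index=fow_checkin message="display button:panel-*"
     | rex field=message "ipaddress: (?<ipaddress>[^ ]+)"
     | stats count by ipaddress
     | fields ipaddress]
```

Each subsearch returns one ipaddress per row, which Splunk expands into an OR of ipaddress=... terms, so the first subsearch acts as an include filter and the second as an exclude filter.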
Hi all, hoping someone can help me with this query. I have a data set that looks at a process and how long it takes to implement. Each event is populated with a start date and an end date. I want to create a calendar view that shows the schedule of the processes in implementation, for example:

process 1: start date 12/08/2024, end date 16/08/2024 (5 days implementation)
process 2: start date 12/08/2024, end date 12/08/2024 (1 day implementation)
process 3: start date 13/08/2024, end date 15/08/2024 (3 days implementation)
process 4: start date 14/08/2024, end date 16/08/2024 (3 days implementation)

I want to be able to produce a graph or a calendar view that shows how many processes we have in implementation, counting each day of their implementation period (based on start and end date). For the above example it would look like:

Date          Count of processes in implementation
12/08/2024    2 (processes 1 and 2)
13/08/2024    2 (processes 1 and 3)
14/08/2024    3 (processes 1, 3 and 4)
15/08/2024    3 (processes 1, 3 and 4)
16/08/2024    2 (processes 1 and 4)

Any help greatly appreciated.
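One hedged way to build this in SPL (a sketch; the field names start_date, end_date, and process are assumptions based on the description): expand each event into one row per day of its implementation window, then count distinct processes per day.

```spl
| eval start=strptime(start_date, "%d/%m/%Y"), end=strptime(end_date, "%d/%m/%Y")
| eval day=mvrange(start, end + 86400, 86400)
| mvexpand day
| eval Date=strftime(day, "%Y-%m-%d")
| stats dc(process) as implementations by Date
| sort 0 Date
```

mvrange with a step of 86400 seconds yields one value per calendar day, inclusive of the end date; using %Y-%m-%d for Date keeps rows in chronological order, and charting implementations over Date gives the calendar-style view described above.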
Hi, I want to set up a home lab with Splunk Enterprise and a Splunk forwarder on the same OS, so the forwarder pulls the logs into Splunk. Is it possible to set it up this way?
Hello everyone, please check the below data:

ERROR 2024-08-09 14:19:22,707 email-slack-notification-impl-flow.BLOCKING @3372f96f] [processor: email-slack-notification-impl-flow/processors/2/route/0/processors/0; event: 5-03aca501-42b3-11ef-ad89-0a2944cc61cb] error.notification.details: { "correlationId" : "5-03aca501-42b3-11ef-ad89-0a2944cc61cb", "message" : "Error Details", "tracePoint" : "FLOW", "priority" : "ERROR", }

ERROR 2024-08-09 14:19:31,389 email-slack-notification-impl-flow.BLOCKING @22feab4f] [processor: email-slack-notification-impl-flow/processors/2/route/0/processors/0; event: 38de9c30-49eb-11ef-8a9e-02cfc6727565] error.notification.details: { "correlationId" : "38de9c30-49eb-11ef-8a9e-02cfc6727565", "message" : "Error Details", "priority" : "ERROR", }

The above two blocks of data are coming in as one event, but I want them to be two events, each starting from the keyword "ERROR". Below is my props.conf entry for this, but it is not working:

[applog_test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE = date
SHOULD_LINEMERGE = true
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ERROR\s+

Please help me fix this. Thanks in advance!
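A sketch of one way this stanza is often fixed (assuming every event begins with ERROR followed by a timestamp in the format shown): break on newlines only when the next line starts a new event, and disable line merging so LINE_BREAKER alone controls event boundaries.

```
[applog_test]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=ERROR\s+\d{4}-\d{2}-\d{2})
TIME_PREFIX = ERROR\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
```

With SHOULD_LINEMERGE = false, BREAK_ONLY_BEFORE is ignored and is therefore dropped here; the zero-width lookahead in LINE_BREAKER keeps the ERROR keyword as part of each new event.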
Hi Splunkers, I am monitoring my websites using Splunk Website Monitoring, and I have configured an alert which sends me an email whenever my website goes down or takes too long to respond. Now I want to receive an alert email whenever my website comes back up or starts functioning normally again, to notify me that the website is working fine now. Could you please share your knowledge here and help me set up this alert? TIA.
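A hedged sketch of one way to alert on recovery (the sourcetype and field names below, web_ping, url, and response_code, are assumptions about how the Website Monitoring app writes its results and may differ in your setup): compare each check with the previous one per URL and match only the down-to-up transition.

```spl
sourcetype=web_ping
| sort 0 url _time
| streamstats current=f last(response_code) as previous_code by url
| where response_code=200 AND previous_code!=200
```

Scheduled, say, every 5 minutes over the last 10 minutes with an alert condition of "number of results > 0", this fires once when a site recovers rather than on every healthy check.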
I am trying to disable the Splunk Secure Gateway app in a clustered environment. However, I don't see an option to disable the app under Apps -> Manage Apps; it only displays the current status of the app, which is "Active". I also tried the same on a single-node installation, where there is an option to disable the app right next to its current status in the same menu, i.e. Apps -> Manage Apps. So, how can I disable Splunk Secure Gateway in the clustered environment?
Hi guys, I wanted to see the predictive monitoring features in the ITSI product. How can I get a free tour of it? Kindly help me, please.
We have enabled an On Demand Capture Session for capturing memory leaks on one of our nodes. After the session ends, we are unable to see the detection dashboard.
Hi All, I need to consolidate/correlate data from 2 different indexes, as explained below. I have gone through multiple posts on this forum from experts relevant to this, but somehow for my use case the same query isn't working. I have the following situation:

In index=windows, the field "host" contains all the different hosts sending logs to Splunk, for example Host01, Host02, etc. In another index=cmdb, the field "dv_name" contains the same hostnames sending logs. There are also other fields like dv_install_status and dv_os in this index which I need to be part of the final output. So, as explained above, the common link is the host field; its name is different across the 2 indexes, but the values are the same.

When I run the following query to get my expected output, it only pulls data from the windows index. It completely ignores the other cmdb index, irrespective of the fact that the cmdb index has data/events from the same hosts in whatever time range I select.

(index=windows) OR (index=cmdb sourcetype="snow:cmdb_ci_server" dv_name=*)
| eval asset_name=coalesce(dv_name, host)
| stats dc(index) as idx_count, values(index) values(dv_os), values(dv_install_status) by asset_name

Output it is showing:

asset_name  idx_count  index          dv_os           dv_install_status
Host01      1          windows
Host02      1          windows

Expected output:

asset_name  idx_count  index          dv_os           dv_install_status
Host01      2          windows, cmdb  Windows Server  Production
Host02      2          windows, cmdb  Windows Server  Test
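A hedged sketch of a variant that often resolves this kind of mismatch (assumptions: host and dv_name may differ in case, and the stats clauses need explicit "as" aliases so the values() columns are named):

```spl
(index=windows) OR (index=cmdb sourcetype="snow:cmdb_ci_server" dv_name=*)
| eval asset_name=lower(coalesce(dv_name, host))
| stats dc(index) as idx_count
        values(index) as index
        values(dv_os) as dv_os
        values(dv_install_status) as dv_install_status
    by asset_name
```

If idx_count still shows 1, comparing `| stats values(host)` from one index with `| stats values(dv_name)` from the other usually reveals the naming difference (case, domain suffix, etc.) that prevents coalesce from lining the two up.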
Hi, can we create widgets that display the drive utilization per volume, like in My Computer? I have to create a dashboard like the one above for separate partitions. Let me know if it is possible. Thanks

^ Post edited by @Ryan.Paredez. Split the post into a new one and updated the subject.
Hi, I am new to Splunk and just got a free Cloud trial. I did the following:

1. Logged in to the Cloud trial instance
2. Created an index named winpc
3. Went to Apps > Universal Forwarder and downloaded it on a Windows PC
4. Installed the forwarder on the Windows PC; during setup I selected "use with cloud instance"
5. Left the receiver index blank, as I had no idea about my Splunk instance FQDN/IP
6. Checked services: the Splunk universal forwarder service is running, logged on as Local System

Issues:

1. I cannot see any logs in the winpc index I created, even after waiting an hour or so
2. How can I tell the forwarder to forward Windows and Sysmon logs too? Should I edit the inputs.conf file?

Kindly guide and help so that I may get logs and learn further.

Regards
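For the second question, a minimal inputs.conf sketch (assuming the index is named winpc and Sysmon is installed with its default channel name; place this on the forwarder, e.g. in $SPLUNK_HOME\etc\system\local\inputs.conf, and restart the forwarder service):

```
[WinEventLog://Security]
index = winpc
disabled = 0

[WinEventLog://System]
index = winpc
disabled = 0

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
index = winpc
disabled = 0
```

Splunk Cloud trials typically provide a Universal Forwarder credentials package that sets the output destination, so outputs.conf normally does not need to be edited by hand.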
Hi, I previously had a Splunk dev license which I used for testing. As my license expired, I requested a new one. It's been more than 3 weeks, yet my request is still pending. Any help is appreciated. Thanks
In the search query, I am trying to view a CSV dataset that shows clusters on a map. I manage to get a visualisation with different-sized bubbles based on the values, bigger bubbles for bigger values. However, once I add it to an existing dashboard, the bubbles disappear. When I navigate to "Data Configurations" -> "Layer Type" and set it to "Marker", the dashboard now shows the clusters, however they are markers of the same size instead of bubbles sized by value.

Here is the source code of my visualisation:

{
    "type": "splunk.map",
    "options": {
        "center": [
            1.339638489909646,
            103.82878183020011
        ],
        "zoom": 11,
        "baseLayerTileServer": "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png",
        "baseLayerTileServerType": "raster",
        "layers": [
            {
                "type": "marker",
                "latitude": "> primary | seriesByName('latitude')",
                "longitude": "> primary | seriesByName('longitude')",
                "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
                "seriesColors": [
                    "#7b56db", "#cb2196", "#008c80", "#9d6300", "#f6540b",
                    "#ff969e", "#99b100", "#f4b649", "#ae8cff", "#8cbcff",
                    "#813193", "#0051b5", "#009ceb", "#00cdaf", "#00490a",
                    "#dd9900", "#465d00", "#ff677b", "#ff6ace", "#00689d"
                ]
            }
        ]
    },
    "dataSources": {
        "primary": "ds_TmJ6iHdE"
    },
    "title": "Dengue Clusters",
    "context": {},
    "containerOptions": {},
    "showProgressBar": false,
    "showLastUpdated": false
}
We have the below deployment:

UF ----> HF ----> IDX

UFs are sending data to the HF, and the HF is acting as an intermediary forwarder between the UFs and the indexers. Now we want to enable TLS between Splunk components. Can we enable TLS between the HF and the IDX tier and leave the UFs alone? Will the UF data also be TLS-compliant? If not, will the UFs still send data to the indexers, or will we stop receiving logs altogether?
Hello, I am experiencing a periodic issue with SmartStore where a bucket tries to be evicted, fails, and repeats that cycle thousands of times. The indexer I/O is fine, the bucket is warm, we have enough cache sizing, and I have not been able to correlate any cache logs with when these failures begin on multiple indexer nodes in the cluster (~33% of indexers). Two questions:

* What is an urgent mode eviction?
* What can cause warm buckets to be unable to be evicted when they rolled to warm about a full day earlier?
Peace be upon you. I am now rolling out correlation searches, but I do not have data to fully test them. I want to activate them in order to protect the company from any attack. I have the MITRE ATT&CK compliance security content, but I do not know where to start or how to organize myself. I hope for advice.
We are using Splunk Cloud in our enterprise, and as part of an automation project we want a programmatic way of running Splunk searches. Based on the Splunk website we found that there is a Node module, splunk-sdk (https://www.npmjs.com/package/splunk-sdk), with which we can access Splunk, even though the module does not explicitly mention anything about Splunk Cloud. Following is the code we attempted, but it fails to connect. Would like to know if any special configuration needs to be done in order to achieve the connection.

let splunkjs = require('splunk-sdk');

let service = new splunkjs.Service({
    scheme: "https",
    host: "mystack.splunkcloud.com",  // placeholder for the cloud stack's search head
    port: 8089,
    username: "myusername",
    password: "***"
});

async function listJobs() {
    try {
        await service.login();
        console.log("Login was successful");
        let jobs = await service.jobs().fetch();
        let jobList = jobs.list();
        for (let i = 0; i < jobList.length; i++) {
            console.log("Job " + i + ": " + jobList[i].sid);
        }
    } catch (err) {
        console.log(err);
    }
}

listJobs();

Following is the error we are getting. Please help in understanding and resolving this issue if anyone has encountered the same.
Hi team, new user here. I was going through https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Admin/ConfigureIPAllowList. I have the sc_admin role, I have also enabled token authentication, and my Splunk Cloud version is greater than 8.2.2201. I wanted to add certain IP addresses to the allow list; however, I don't see the option to add IP addresses (screenshot attached).
Calling all cybersecurity professionals! The latest addition to the Splunk certification family is here, and it’s designed to validate your skills as a SOC Engineer. The Splunk Certified Cybersecurity Defense Engineer certification focuses on using Splunk Enterprise Security and Splunk SOAR to optimize workflows, craft and tune effective detections, and build automations following industry best practices. Ready to take your career to the next level? Read on to find out how! Level Up to a Defense Engineering Career Path Are you ready to move into cybersecurity defense engineering for security operations centers (SOC)? This exciting career path involves analyzing security vulnerabilities and threats to create and tune detections, incorporating risk, developing and following security processes and programs, and efficiently automating standard operating procedures. By becoming a Splunk Certified Cybersecurity Defense Engineer, you’ll be equipped to handle these critical tasks and more. Who Should Pursue This Certification? Cybersecurity Professionals Take your SOC analyst career further and level up as a Splunk Certified Cybersecurity Defense Engineer. This certification is perfect for those who want to demonstrate their expertise in optimizing detection and automation in a SOC environment. Career Builders Looking to advance your career? Earning this certification will help you climb the ranks as a Splunk-certified professional, opening up new opportunities for growth and advancement in the cybersecurity field. SOC Detection Engineers Solidify your position as a cybersecurity detection engineer and enhance your efficiency with Splunk Enterprise Security and Splunk SOAR. This certification will validate your skills and knowledge, ensuring you’re at the top of your game.   
Exam Details including Prerequisite

Level: Professional
Prerequisites: Splunk Certified Cybersecurity Defense Analyst
Length: 120 minutes
Format: 100 multiple choice questions
Pricing: $0 USD *while in beta
Delivery: Exam is given by our testing partner, Pearson VUE

Get Prepared for the Exam

Ready to take the plunge? Here are some resources to help you get started:

Review exam requirements and recommendations on the Splunk Cybersecurity Defense Engineer track flowchart.
Discover what to expect on the exam via the test blueprint.
Get step-by-step registration assistance with the Exam Registration Tutorial.

Don't miss out on this opportunity to validate your skills and advance your career. Get prepped to take the Splunk Certified Cybersecurity Defense Engineer exam today and take the first step towards becoming a leader in cybersecurity defense engineering! Happy learning!

– Callie Skokos on behalf of the Splunk Certification crew
I have reviewed the curl command syntax in the details section of the Add-on download page but was not able to discern how to pass the following to the "| curl" command:

1) How can I pass the equivalent of "-k" or "--insecure"?
2) How do I pass 2 headers in the same command line?

From the Linux prompt, my command looks like this:

curl -X POST -H "Content-Type: application/json" -H "UUID: e42eed31-65bb-4283-ad05-33f18da75513" -k "https://abc.com/X1" -d "{ lots of data }"