All Topics

Hello, I need some help here. The goal is to pass one IP_Address found in the inner search to the outer search. The IP is correctly extracted, but I'm getting the following error from the "where" command and am clueless at this point. Here's the error:

Error in 'where' command: The operator at '10.132.195.72' is invalid.

And here's the search:

index=ipam sourcetype=data earliest=-48h latest=now()
| where cidrmatch(name, IP_Address)
    [ search index=networksessions sourcetype=microsoft:dhcp (Description=Renew OR Description=Assign OR Description=Conflict) earliest=-15min latest=now()
    | head 1
    | return ($IP_Address) ]

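For reference, a likely cause of this error: return $IP_Address expands to the bare, unquoted value, and because the subsearch sits after the where expression rather than inside it, the where command ends up parsing "cidrmatch(name, IP_Address) 10.132.195.72" and trips over the literal. A minimal sketch of one fix, assuming the fields shown above, is to place the subsearch in the second argument and quote the value before returning it:

index=ipam sourcetype=data earliest=-48h latest=now()
| where cidrmatch(name,
    [ search index=networksessions sourcetype=microsoft:dhcp (Description=Renew OR Description=Assign OR Description=Conflict) earliest=-15min latest=now()
    | head 1
    | eval IP_Address="\"" . IP_Address . "\""
    | return $IP_Address ])
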
Hello, I am new to Splunk and having an issue with the following command:

SendersMNO="*" NOT ("VZ", "0", "Undefined")
| where SenderType="Standard"
| stats count as Complaints by SendersAddress
| sort 10 -Complaints
| table SendersAddress, SendersMNO, Complaints

The command works; however, the result column for SendersMNO is not producing any results. Any reason why? All help is appreciated.

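For reference, a likely reason: stats keeps only the fields it aggregates or groups by, so SendersMNO is gone by the time the table command runs. A minimal sketch of one fix, assuming each sender address maps to a single MNO, is to carry the field through the stats:

SendersMNO="*" NOT ("VZ", "0", "Undefined")
| where SenderType="Standard"
| stats count as Complaints, values(SendersMNO) as SendersMNO by SendersAddress
| sort 10 -Complaints
| table SendersAddress, SendersMNO, Complaints
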
Add-on: https://splunkbase.splunk.com/app/3662/

Known Affected: 4.8.1

Symptoms: You begin to predominantly see hexadecimal events in your Cisco FireSIGHT index/sourcetype instead of real data, and you see large gaps between events (usually ~10 minutes, the time it takes to roll over a file). The 'Source' also ends with '.log.swp' instead of '.log'.

Cause: $SPLUNK_HOME/etc/apps/TA-eStreamer/default/inputs.conf

[monitor://$SPLUNK_HOME/etc/apps/TA-eStreamer/bin/encore/data]
disabled = 0
source = encore
sourcetype = cisco:estreamer:data
crcSalt = <SOURCE>

I believe the issue is with the line 'source = encore', because 'crcSalt = <SOURCE>' is also specified. Since all files have the same source, all files have the same crcSalt, which is why the actual '.log' is not collected. The '.swp' manages to get collected because Splunk checks the '.log', and since the swp is a very short-lived file, Splunk accidentally collects a lot of garbage unrelated to the actual file contents (sorry Linux admins for butchering the technical detail).

Solution: Edit $SPLUNK_HOME/etc/apps/TA-eStreamer/default/inputs.conf, comment out the source line, then restart Splunk services.

If someone knows of a way to override (via a local inputs.conf) source back to the filename (which changes frequently), so that editing the default inputs.conf is not necessary, please comment below. Those with the Cisco license allowing TAC support on this add-on may want to raise this issue with them so they can fix it for new downloads and future versions -- I lack that particular license. Hope this helps someone (I did a search for encore and hex and didn't see any prior conversation on the topic).

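For reference, a sketch of the edited stanza the solution above describes (values exactly as in the default file; with the static source commented out, each file keeps its own source name, so <SOURCE> salts each file differently again):

[monitor://$SPLUNK_HOME/etc/apps/TA-eStreamer/bin/encore/data]
disabled = 0
# source = encore
sourcetype = cisco:estreamer:data
crcSalt = <SOURCE>
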
I have set up a single-node test instance of Splunk to try to ingest Zscaler LSS (not NSS) logs via a TCP input. However, it is not ingesting any data, despite my being able to see traffic via tcpdump on that port.

I have installed the latest Zscaler Splunk App (v2.0.7) and the Zscaler Technical Add-on (v3.1.2):

[root@ip-10-127-0-113 apps]# ls | grep scaler
TA-Zscaler_CIM
zscalersplunkapp

Via the web UI, I have set up a TCP input on port 10000 and set the sourcetype, app, and index options. I have checked to make sure that Splunk is listening on TCP/10000 and can see that it is:

[root@ip-10-127-0-113 apps]# netstat -antp | grep 10000
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 7992/splunkd
tcp 0 0 10.127.0.113:10000 x.x.x.x:38392 SYN_RECV -
tcp 0 0 10.127.0.113:10000 x.x.x.x:51586 SYN_RECV -
tcp 0 0 10.127.0.113:10000 x.x.x.x:53844 SYN_RECV -

I can't see any errors in the _internal index (although I could be searching wrong). I'm using the below search:

index=_internal "err*"

The only errors I can see relate to the 'summarize' command. Any pointers would be really appreciated. Many thanks,

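Two hedged observations that may help here: connections stuck in SYN_RECV usually mean the TCP handshake never completes (for example, a firewall or security group dropping the return packets), so Splunk may never receive any payload to index; and quoting "err*" makes Splunk search for that literal string rather than treating it as a wildcard. A sketch of a more targeted _internal search, assuming the default splunkd logging:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) component=TcpInputProc
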
Hi Community, how can I make a saved search report open in statistics mode and allow downloading of a .csv file? Currently I have to open the report in search to get the option to download the .csv. I added the "display.statistics.show = 1" option to the saved search. Thank you

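For reference, a sketch of the savedsearches.conf settings that typically control which view a report opens on (the stanza name is a placeholder; whether the CSV export then appears still depends on the user's permissions):

[My Report]
display.general.type = statistics
display.page.search.tab = statistics
display.statistics.show = 1
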
Hi, I am using Splunk Cloud and I need to disable some indexes temporarily. I am using the AWS add-on app to ship AWS ALB logs from an S3 bucket. My daily data ingestion is going beyond the license, and I would like to disable these indexes temporarily.

I can see there is an option to disable an input in the inputs section, but the same option is not available for an index, although the index listing page shows it as enabled in the last column. I would appreciate it if someone has a solution for the problem mentioned above. Thanks.

Muzeeb

Hi, could you please assist in changing my community username?

Regards,

Hello! My objective is to read the values of a Splunk table visualization from a dashboard into a JavaScript object for further processing. I'm not sure what object yet, but my main issue lies with iterating through the table and extracting the cell values. Can anybody provide some sample JS code for identifying the table object and iterating through its values? Thanks! Andrew

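Not an authoritative answer, but a minimal SplunkJS sketch of the usual pattern, assuming a Simple XML table with id "myTable" (the id is a placeholder, and by default the results model only holds the first page of rendered rows):

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // Look up the table component, then its backing search manager
    var table = mvc.Components.get('myTable');
    var search = mvc.Components.get(table.settings.get('managerid'));

    // 'results' is the rendered result set; wait for data before reading
    var results = search.data('results');
    results.on('data', function () {
        var fields = results.data().fields; // column names
        var rows = results.data().rows;     // array of row arrays
        rows.forEach(function (row) {
            row.forEach(function (cell, i) {
                console.log(fields[i] + ' = ' + cell);
            });
        });
    });
});
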
Hi, I would like to know the list of users logging in, and from which region/IP each of them logs in.

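A rough sketch of one way to get this from Splunk's own audit data, assuming your login events in _audit carry a clientip field (field names vary by authentication setup):

index=_audit action="login attempt" info=succeeded
| iplocation clientip
| stats latest(_time) as last_login, values(clientip) as src_ip by user, Country, Region
| convert ctime(last_login)
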
Hi all, I'm trying to find which programs from a given list haven't raised an event in the event log in the last time period, to create an alert based on it. For an individual alert I have

index=eventlogs SourceName="my program"
| stats count as COUNT_HEARTBEAT
| where COUNT_HEARTBEAT=0

which works. How can I supply a list of programs and list which of them have a COUNT_HEARTBEAT of 0, so that I can make a generic alert?

Thanks,

Kind regards,

Ian

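A minimal sketch of the usual pattern, assuming the program list lives in a lookup file programs.csv with a SourceName column (the lookup name is hypothetical): append a zero-count row for every program, then keep only the programs whose total stays at zero:

index=eventlogs [| inputlookup programs.csv | fields SourceName ]
| stats count by SourceName
| append [| inputlookup programs.csv | fields SourceName | eval count=0 ]
| stats sum(count) as COUNT_HEARTBEAT by SourceName
| where COUNT_HEARTBEAT=0
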
Hi guys, I am new to Splunk. I need to run a query to extract the system name value, which is repeated twice in the same log event. The logs in one event are:

user: user1 system: system1 user: user2 system: system2

The output should look like below:

output1 output2
system1 system2

Cheers.

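A minimal sketch of one approach, assuming the event text matches the sample above (index and sourcetype are placeholders): capture every occurrence with rex max_match=0, then split the multivalue field into columns:

index=your_index sourcetype=your_sourcetype
| rex max_match=0 "system:\s*(?<system>\S+)"
| eval output1=mvindex(system, 0), output2=mvindex(system, 1)
| table output1, output2
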
Hello everyone, I am working on a dashboard with two event panels, and I would like to use the outcome of panel 1 as an input to panel 2. Can you please advise what the optimal way is to take a specific field's output and use it as an input in the next panel? I tried a base search, but it did not provide the result I expected.

Panel 1:

<query>index=xyz sourcetype=vpn *session* | fields session, connection_name, DNS, ip_subnet, Location, user | stats values(connection_name) as connection, values(Dns) as DNS by session | join type=inner session [ search index=abc sourcetype=vpn *Dynamic* | fields assigned_ip, session | stats values(assigned_ip) as IP by session ] | table User, session, connection_name, ip_subnet, IP, DNS, Location | where user="$field1$" OR connection_name="$field2$" OR session="$field3$"</query>

Once the output is generated for the above query, I would like to take the value displayed for ip_subnet and use it as the input for panel 2.

Panel 2:

<query>| inputlookup letest.csv | rename "IP address details" as IP | xyseries Ip_subnet, Location, IP | where Ip_subnet="$Ip_subnet$"</query>

In panel 2, $Ip_subnet$ is the input that would be taken from the value of Ip_subnet in panel 1.

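For reference, the usual way to pass a clicked value between panels is a drilldown token. A minimal Simple XML sketch, assuming panel 1 is a table and a row click should drive panel 2 (the token name matches the $Ip_subnet$ already used in the panel 2 query):

<table>
  <search>
    <query>... panel 1 query from above ...</query>
  </search>
  <drilldown>
    <set token="Ip_subnet">$row.ip_subnet$</set>
  </drilldown>
</table>
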
Hi, I know that there is a free Splunk Fundamentals 1 course that issues a certificate. But when I go to this page https://www.splunk.com/en_us/training/free-courses/splunk-fundamentals-1.html and register, it offers me single courses and not the Splunk Fundamentals 1 course. How do I take Splunk Fundamentals 1 with the exam questions and final certificate? Thanks. Regards

Hello Splunkers, I face some challenges right now. We run a Splunk installation with about 50 active users and 10 different roles. Now we need to allow them to send themselves alert messages via email.

First problem: according to the docs, it's not possible to send an email if you're not an admin and the SMTP server needs authentication.

Second problem: you cannot set up per-role or per-user sender info, only system-wide via the GUI.

I found out that you can supply username= and password= parameters via an SPL search, but this does not apply to alerts, and the creds then show up in plaintext in the logs. I found that you can supply creds via an alert_actions.conf file per app, but then the creds would show up in the git repo where we version our apps.

Some .conf files honor environment variables, but I did not find whether alert_actions.conf does so, and even then they would still be accessible via the CLI.

Can it be so hard for Splunk to implement something as basic as per-user email sending? Has somebody achieved something similar?

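For reference, a sketch of the per-app override mentioned above (the [email] stanza in alert_actions.conf accepts SMTP auth settings; the hostnames and credentials here are placeholders, and keeping the file in local/ outside version control keeps the creds out of the git repo, though they still live on disk):

# $SPLUNK_HOME/etc/apps/<your_app>/local/alert_actions.conf
[email]
mailserver = smtp.example.com:587
use_tls = 1
auth_username = alerts@example.com
auth_password = <smtp password>
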
View our Tech Talk: Admin Edition, Running a Healthy Environment on Splunk Enterprise

How do you run a healthy environment, or even tell if your environment is healthy? In this Tech Talk we’ll go over the most important features of Splunk Enterprise that are required to run a healthy environment, as well as the indicators and reports you can use to diagnose and troubleshoot issues related to degraded user experiences.

Tune in to learn about these topics:
- Using the Monitoring Console’s Health Check on Splunk Enterprise instances
- Configuring feature thresholds in health report monitors
- Configuring and running RapidDiag jobs for advanced debugging of cluster or Linux system issues

PLEASE HELP! This has been driving me mad for days! Every time an event is added, it's re-reading the text file from the start and re-indexing events. I am getting hundreds of duplicate events and have tried a variety of combos in inputs.conf, but still can't solve it!

I am monitoring a series of text files. Each day a new .txt file is created and events are written into this file continuously throughout the day, until the beginning of the next day, when again a new file is created. The files are named as follows:

Statistics_20211104_034330_840.txt

The contents of the file are as follows:

QPS statistics: SW-Version:3.64 [UTC+00:00]
time,id,valid,invalid,mode,......[ETC ETC ETC]
2021-11-04T03:43:19+00:00,248559,1,0,A,....[ETC ETC ETC]
2021-11-04T03:43:19+00:00,248560,1,0,A,....[ETC ETC ETC]

This is what I currently have in inputs.conf:

[monitor://\\Lgwnasapp002\bsr$\]
disabled = false
index = idx_security_scanner
sourcetype = QPSdata
whitelist = .+Statistics_\d{8}_\d{6}_\d{1,5}\.txt
crcSalt = <SOURCE>

Any ideas?

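Not a diagnosis, just two hedged things to try: monitoring over a UNC share can trigger re-reads when the share reports inconsistent file metadata, and since the daily filenames are already unique, the crcSalt may be doing more harm than good here. A sketch that drops the salt and instead hashes past the identical header lines when fingerprinting files (initCrcLength is a standard inputs.conf setting; the value is a guess):

[monitor://\\Lgwnasapp002\bsr$\]
disabled = false
index = idx_security_scanner
sourcetype = QPSdata
whitelist = .+Statistics_\d{8}_\d{6}_\d{1,5}\.txt
# crcSalt = <SOURCE>
initCrcLength = 1024
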
View our Tech Talk: IT Edition, Accelerate Your Journey to Full-Stack AIOps with the New Splunk Content Pack for Observability

Discover how you can maximize cross-domain visibility with minimal effort and time by bringing your Observability logs, metrics, and traces into the Data-to-Everything Platform, with easy-to-see results and a deep link into Splunk Observability Cloud in just two clicks. All of this value, and did we mention, it’s FREE with IT Service Intelligence?

Here’s a snapshot of what we will cover in our agenda:
- How to use the new Observability Content Pack to bring Splunk Synthetic Monitoring, Infrastructure Monitoring, and Application Performance Monitoring together in one single view within Splunk IT Essentials Work and IT Service Intelligence
- How to drill into results in a couple of clicks and deep link into Splunk Observability Cloud in context
- How to leverage out-of-the-box content or quickly create custom service health and performance dashboards for DevSecOps teams and business executives
- How to leverage trends, history, and data for all KPIs and Entities to apply predictive analytics and machine learning with your threshold rules

Hi all, I am looking to extract data from an index search for the below requirement: I need the timestamp of the first event of each day, for the last 30 days, in a particular index and sourcetype. Can someone help with the query to get the desired output?

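A minimal sketch of one way to do this (index and sourcetype are placeholders): group events by calendar day, then take the earliest timestamp in each group:

index=your_index sourcetype=your_sourcetype earliest=-30d@d
| eval day=strftime(_time, "%Y-%m-%d")
| stats min(_time) as first_event by day
| eval first_event=strftime(first_event, "%Y-%m-%d %H:%M:%S")
| sort day
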
View our Tech Talk: DevOps Edition, How to Increase Trace Cardinality with OpenTelemetry

Access to high-cardinality metrics provides important signals about the overall health and performance of our distributed systems, and determining what really went wrong relies on the telemetry captured from these distributed workloads. With OpenTelemetry, we can capture these metrics and easily auto-instrument our applications to begin gathering data quickly. While auto-instrumenting applications is the fastest way to start collecting data with no code changes required, there may be several metrics to consider to better understand your deployment and narrow down application bottlenecks.

Tune in to learn:
- How to modify your OpenTelemetry configuration to include the necessary metadata to identify your workloads
- How span tags can reduce your Mean Time to Recovery (MTTR)

View our Tech Talk: Security Edition, Detections for Trickbots, Malicious PowerShell, and DevSecOps

The Splunk Threat Research team provides additional context to emerging threats. We create in-product security content that you can use right out of the box in Splunk Enterprise Security and Splunk SOAR! During the last three months, we dove into understanding how adversaries use a variety of methods to get their hands on private data. We learned how Trickbots, botnets, and webinjects work together in a cyber campaign. We explored how to use Script Block Logging to detect malicious PowerShell. And lastly, we looked into the typical development lifecycle to see how advanced threats infiltrate software build pipelines, source code repositories, and container orchestrators.

Watch this webinar to learn:
- How Trickbots, botnets, and webinjects work together in a cyber campaign
- How to detect malicious PowerShell with Script Block Logging
- How to develop detections for all phases of the DevSecOps lifecycle