Hi Community, I have these alerts in my EDR and I want to create a correlation search that surfaces them in Splunk:

Found alert GnDump.exe was returned as Malware from the Fidelis Sandbox Submission on endpoint HQ0S-IT-NAS.Jmcc2.local
Found alert GnScript.exe was returned as Malware from the Fidelis Sandbox Submission on endpoint HQ0S-IT-NAS.Jmcc2.local
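A minimal correlation-search sketch, assuming the Fidelis alerts above are already being indexed in Splunk; the index name (edr), sourcetype (fidelis:endpoint:alert), and extracted field names here are hypothetical and should be replaced with whatever your add-on actually provides:

index=edr sourcetype="fidelis:endpoint:alert" "returned as Malware" "Fidelis Sandbox Submission"
| rex field=_raw "Found alert (?<file_name>\S+) was returned as Malware .* on endpoint (?<dest>\S+)"
| stats count earliest(_time) as first_seen latest(_time) as last_seen by file_name dest
| convert ctime(first_seen) ctime(last_seen)

Saved as a scheduled alert (or as a correlation search in Enterprise Security), this would produce one row per malicious file and endpoint.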
Hi, I'm doing prep work for my 8.2.6 upgrade to 9.0.1 and I have a couple of apps which are not listed as compatible with 9.0 in Splunkbase. These are: Splunk Datasets Add-on | Splunkbase, and Splunk Secure Gateway - Get started with Splunk Secure Gateway - Splunk Documentation. I note that the Splunk docs for both of these apps indicate that they are built into Splunk. My question is: should I delete these two from the etc/apps folder BEFORE I do the upgrade?
Not sure if anyone is using this script to pull logs from Salesforce Commerce Cloud; I'm hoping to get some input from similar cases. URL: https://github.com/Pier1/sfcc-splunk-connector The script is installed on a server that also has a Universal Forwarder. I know the UF is forwarding, because other inputs.conf stanzas on the same host are pushing logs to Splunk Cloud. In this case, however, the SFCC data is produced by a Python script. The script runs okay on the server, but I'm not sure why the UF isn't forwarding its output into Splunk.
Splunk Add-on for Cisco ESA not working when installed on Splunk Cloud? I get an "Oops. Page Not Found" error when I try to open the app.
I need to calculate (count of the good 15-minute intervals) / (total count of intervals) * 100, where a "good" interval is one in which status code = 200, average response time < 300 milliseconds, and 99.99th percentile response time < 1500 milliseconds. Could someone help? I already have the status code and the response time in two separate fields.
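A minimal sketch of one way to do this, assuming hypothetical field names status_code and response_time (in milliseconds) and treating an interval as good only if every event in it returned 200; swap in your own index and field names:

index=your_index
| bin _time span=15m
| stats avg(response_time) as avg_rt perc99.99(response_time) as p9999_rt count(eval(status_code!=200)) as non_200 by _time
| eval good=if(non_200=0 AND avg_rt<300 AND p9999_rt<1500, 1, 0)
| stats sum(good) as good_intervals count as total_intervals
| eval pct_good=round(good_intervals/total_intervals*100, 2)

The first stats collapses events into one row per 15-minute bucket; the second computes the percentage across buckets.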
My Query:

index=test sourcetype=true AND private AND beta
| rex field=_raw "\[private]\s(?<category>\S+\s+\S+\s+\S+)"
| dedup category, source
| eval category=upper(category)
| stats count by category
| rename count as count1
| appendcols [search index=test sourcetype=true AND private AND alpha
  | rex field=_raw "\[private]\s(?<category>\S+\s+\S+\s+\S+)"
  | dedup category, source
  | eval category=upper(category)
  | stats count by category
  | rename count as count2]
| eval Total=(count1-count2)

When the second query doesn't have any events, I am not getting the Total column.

Current output when the second search has no events:
category    count1
xxxx        5

Desired output:
category    count1    count2    Total
xxxx        5         0         5
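A minimal sketch of the usual fix: keep the query as-is and default the missing column to 0 before the subtraction, placing the fillnull after the appendcols and before the eval:

... | appendcols [search ... | stats count by category | rename count as count2]
| fillnull value=0 count2
| eval Total=count1-count2

fillnull replaces the absent count2 with 0 when the subsearch returns no events, so the eval can always compute Total.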
  This is the original link.  Anyone know where this has been moved to? http://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F It describes all of the props.conf attributes and which tier they are applicable to.
How do I list multiple sources in a query, e.g. sourcetype=xml source="/wealthsuite/tti/current/*"?
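A minimal sketch of the two common forms, using the original source path plus a hypothetical second path as the example:

sourcetype=xml source="/wealthsuite/tti/current/*" OR source="/wealthsuite/tti/archive/*"

or, more compactly, with the IN operator:

sourcetype=xml source IN ("/wealthsuite/tti/current/*", "/wealthsuite/tti/archive/*")

Both match events whose source is either path; wildcards are allowed inside the quoted values.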
I have status values of Auto and Manual, and car values of BMW, Honda, and Audi. My search is: index=* | stats count(status) as Total by car. Is there any way I can get the results as shown in the attached picture?
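The attached picture is not reproduced here, but if the goal is a count of each status per car (one row per car, one column per status), a minimal sketch would be:

index=* status=* car=*
| chart count over car by status

This yields columns such as Auto and Manual for each car, and can be extended with | addtotals to add a Total column.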
Hello fellow Splunkers, I've recently run into a bit of an issue while working on an automation process. For context, I have already reviewed the following without success: Solved: Re: Generate PDF from View in REST API - Splunk Community, and Can I Export PDF Via Rest? - Splunk Community. In short, when I do not ship the modified XML in my GET request I get the following response: PDF endpoint must be called with one of the following args: 'input-dashboard=<dashboard-id>' or 'input-report=<report-id>' or 'input-dashboard-xml=<dashboard-xml>' Which is more or less expected. However, when I do send the modified XML in my GET request, this is what comes back: I know the endpoint is functioning, as I'm able to manually export the dashboard results through the web interface without issue. However, the manual process ties up half my day and is not scalable moving forward. Any advice from those who have solved this would be greatly appreciated. Thanks in advance.
I upgraded from Splunk 8.2.1 to Splunk 9.0.1, and MS-Windows AD Objects started showing the dashboard error, so I upgraded to MS-Windows AD Objects 4.1.1, which claims to be compatible with 9.0. But even after upgrading, the same error persists. Does MS-Windows AD Objects use jQuery 3.5, does 9.0.1 not work with it, or am I spinning my wheels trying to make this thing work? I have the same issue with other apps, but figured I would start here. I looked through the boards and found material on jQuery 3.5, but nothing specific to AD Objects 4.1.1. Most things in the app seem to work; it just always throws the error. I also tried the "clone the dashboard in the new studio" option, with no joy there. Any help is appreciated.
I want to use the map command to add up the total event time for each day during the interval from 6am to 6pm.

For each day:
- the "earliest" token in my map command = start of each day + 6 hours (Start1)
- the "latest" token in my map command = start of each day + 18 hours (End1)

Using these tokens, the map command searches over my selected Splunk search timeframe. In my map command:
1. For each day, I subtract each event's start time from its end time to get Diff.
2. To get the total event time for each day, I sum the time differences (sum(diff)) to get total_time_of_events.
3. Next, I take info_max_time - info_min_time for each search (for each earliest/latest token pair) to get the time value for each 12-hour day.
4. Finally, for each search I divide total_time_of_events by the search time span and multiply by 100 to get the percentage of time that events are being pulled into Splunk each day.

Yet it is not working! My search returns "No results found". May I please have help? What am I doing wrong?

CODE:

| table BLANK hour date_mday date_month date_year
| bin span=1d _time
| eval Month=case(date_month="august","8")
| eval Start=Month+"/"+date_mday+"/"+date_year
| eval start=strptime(Start,"%m/%d/%y")
| eval Start1=start+21600
| eval End1=start+64800
| map search="search (index...) earliest=$Start1$ latest=$End1$
  | bin span=1d _time | dedup _time
  | eval timeend=strptime(DateEnd,\"%m/%d%Y %I:%M:%S %p\")
  | eval timestart=strptime(DateStart,\"%m/%d/%Y %I:%M:%S %p\")
  | eval diff=round(timeend-timestart)
  | stats sum(diff) as total_time_of_events by BLANK
  | addinfo
  | eval IntTime=info_max_time-info_min_time
  | eval prcntUsed=round((total_time_of_events/(IntTime))*100)
  | rename prcntUsed as Percent_of_event_time"
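Without knowing the full data set, here is a minimal sketch of a map-free alternative that often achieves the same result: filter events to the 6am-6pm window, then aggregate per day. The index filter is hypothetical (elided as in the original), and DateStart/DateEnd are the field names taken from the search above; 43200 is the number of seconds in the 12-hour business window:

index=yourindex
| eval hour=tonumber(strftime(_time, "%H"))
| where hour>=6 AND hour<18
| eval timestart=strptime(DateStart,"%m/%d/%Y %I:%M:%S %p"), timeend=strptime(DateEnd,"%m/%d/%Y %I:%M:%S %p")
| eval diff=round(timeend-timestart)
| bin span=1d _time
| stats sum(diff) as total_time_of_events by _time
| eval Percent_of_event_time=round(total_time_of_events/43200*100)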
Please let me know if anyone has experience bringing in Guardicore data other than by using a Heavy Forwarder. Thank you!
Hello, CyberArk data comes in through a syslog server, and according to the Splunk documentation (https://docs.splunk.com/Documentation/AddOns/released/CyberArk/Installation) the CyberArk TA needs to be installed on the search head (or search head cluster). I installed this TA directly on the syslog server, but it is not working as expected. How should I configure the syslog server, the SHC, and CyberArk together? Any help would be highly appreciated. Thank you!
I was searching for a simple way to convert all types of MAC address to a "more" standard format. I found various solutions, but not a single one-liner that I liked, so I made one. This will convert any MAC format to XX:XX:XX:XX:XX:XX. (The output can be modified to the format of your choice.)

| rex mode=sed field=mac "s/[^0-9a-fA-F]//g s/(..)(..)(..)(..)(..)(..)/\1:\2:\3:\4:\5:\6/g y/abcdef/ABCDEF/"

s/[^0-9a-fA-F]//g removes every character that is not a hexadecimal digit (0-9, a-f, A-F), so all separators are gone
s/(..)(..)(..)(..)(..)(..)/\1:\2:\3:\4:\5:\6/g sets the output format to xx:xx:xx:xx:xx:xx
y/abcdef/ABCDEF/ changes the hex letters to upper case
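A quick way to try it out, using a made-up mixed-separator MAC as the input:

| makeresults
| eval mac="00-1a.2B:3c4d5e"
| rex mode=sed field=mac "s/[^0-9a-fA-F]//g s/(..)(..)(..)(..)(..)(..)/\1:\2:\3:\4:\5:\6/g y/abcdef/ABCDEF/"

This should return mac=00:1A:2B:3C:4D:5E regardless of the separators in the input.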
I have two roles, A and B; they both inherit only from the "user" role. If they create a dashboard in Search, they cannot edit the permissions to share the dashboard to "App" so that the other role, or users in the same role, can see it. By default it is built and remains private. If I add all capabilities under the "power" role (that aren't in "user") to roles A and B, they still cannot edit permissions on their own dashboard to share it to the app context so the dashboard can be shared in the Search app. If I add "power" to the inheritance of roles A and B, then they can edit the permissions. What am I missing?
How do I get a count of Low, Medium, High, Critical in a Splunk Search?   This is the current search I am using: `get_tenable_index` sourcetype="tenable:sc:vuln" severity=Low OR severity=Medium OR severity=High OR severity=Critical | dedup plugin_id, port, protocol, sc_uniqueness, source | eval key=plugin_id."_".port."_".protocol."_".sc_uniqueness."_".source | table severity, synopsis, solution, port, protocol, ip | outputlookup append=true key_field=key sc_vuln_data_lookup
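A minimal sketch of one way to get the counts, reusing the dedup logic from the search above and replacing the table/outputlookup with a stats:

`get_tenable_index` sourcetype="tenable:sc:vuln" severity IN (Low, Medium, High, Critical)
| dedup plugin_id, port, protocol, sc_uniqueness, source
| stats count by severity

If you want the rows in severity order rather than alphabetical order, append something like | eval order=case(severity="Critical",1, severity="High",2, severity="Medium",3, severity="Low",4) | sort order | fields - order.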
This blog post is part of an ongoing series on OpenTelemetry. Curious about OpenTelemetry but more interested in logs than APM tracing or metrics? Look no further! This blog post will walk you through your first OpenTelemetry logging pipeline.

WARNING: WE ARE DISCUSSING A CURRENTLY UNSUPPORTED CONFIGURATION. When sending data to Splunk Enterprise, we currently only support the use of the OpenTelemetry Collector in Kubernetes environments. As always, use of the Collector is fully supported to send data to Splunk Observability Cloud.

The OpenTelemetry project is the second largest project of the Cloud Native Computing Foundation (CNCF). The CNCF is a member of the Linux Foundation and, besides OpenTelemetry, also hosts Kubernetes, Jaeger, Prometheus, and Helm, among others.

OpenTelemetry defines a model to represent traces, metrics, and logs. Using this model, it orchestrates libraries in different programming languages to allow folks to collect this data. Just as important, the project delivers an executable named the OpenTelemetry Collector, which receives, processes, and exports data as a pipeline.

The OpenTelemetry Collector uses a component-based architecture, which allows folks to devise their own distribution by picking and choosing which components they want to support. Please see our official documentation to install the collector. At Splunk, we manage the distribution of our version of the OpenTelemetry Collector under this open-source repository. The repository contains our configuration and hardening parameters as well as examples.

The OpenTelemetry Collector works using a configuration file written in YAML. The configuration defines pipelines. For our case, we have defined a pipeline that reads from a file and sends its data to Splunk:

pipelines:
  logs:
    receivers: [filelog]
    processors: [batch]
    exporters: [splunk_hec/logs]

We use a processor named the batch processor to place multiple entries in one payload, so we avoid sending many messages at once to Splunk. Note we have placed our pipeline under the name logs. That means we intend to use this pipeline to ingest log records. If you have multiple log pipelines, they must start with logs, followed by a slash and a unique name, such as:

pipelines:
  logs:
    …
  logs/other:
    …
  logs/anotherone:
    …

This notation is also used for other components, such as filelog or splunk_hec in our example. Going back to our pipeline, we have defined it to contain a receiver named "filelog". We have not written it down yet, so let's write down our receivers:

receivers:
  filelog:
    include: [ /output/file.log ]

Our filelog receiver will follow the file /output/file.log and ingest its contents. We also need to define our Splunk HEC exporter. Here is our exporters section:

exporters:
  splunk_hec/logs:
    token: "00000000-0000-0000-0000-0000000000000"
    endpoint: "https://splunk:8088/services/collector"
    source: "output"
    index: "logs"
    max_connections: 20
    disable_compression: false
    timeout: 10s
    tls:
      insecure_skip_verify: true

This exporter defines the configuration settings of a Splunk HEC endpoint. More documentation and examples are available as part of the OpenTelemetry Collector Contrib GitHub repository. This particular exporter says it will send data to the logs index, under the source "output", to a Splunk instance located at the hostname splunk, with an HEC token that is just a set of zeroes. We're now going to set all the pieces in motion to deliver this example to you end to end.
First, we are going to define a program that outputs data to a file:

bash -c "while(true) do echo \"$$(date) new message\" >> /output/file.log ; sleep 1; done"

This bash script will append the current date, accompanied by "new message", every second, until told to stop.

Second, we prepare a simple Splunk Enterprise Docker container to run for this example. We set up its logs index with a splunk.yml configuration file:

splunk:
  conf:
    indexes:
      directory: /opt/splunk/etc/apps/search/local
      content:
        logs:
          coldPath: $SPLUNK_DB/logs/colddb
          datatype: event
          homePath: $SPLUNK_DB/logs/db
          maxTotalDataSizeMB: 512000
          thawedPath: $SPLUNK_DB/logs/thaweddb

We load this file by mounting it as a volume. We also run the container to set up a default HEC token, open ports, accept the Splunk license, and set a default admin password. Obviously, this is only useful here for our demonstration. There are more interesting configuration possibilities if you follow along with the GitHub repository for Splunk Docker, and be sure to check out the Splunk Operator for larger, production-grade deployments. All told, our Splunk server looks like this in our Docker Compose file:

# Splunk Enterprise server:
splunk:
  image: splunk/splunk:latest
  container_name: splunk
  environment:
    - SPLUNK_START_ARGS=--accept-license
    - SPLUNK_HEC_TOKEN=00000000-0000-0000-0000-0000000000000
    - SPLUNK_PASSWORD=changeme
  ports:
    - 18000:8000
  healthcheck:
    test: ['CMD', 'curl', '-f', 'http://localhost:8000']
    interval: 5s
    timeout: 5s
    retries: 20
  volumes:
    - ./splunk.yml:/tmp/defaults/default.yml
    - /opt/splunk/var
    - /opt/splunk/etc

With that, you are ready to try out our example. To run this example, you will need at least 4 gigabytes of RAM, as well as git and Docker Desktop installed. First, check out the repository using git clone:

git clone https://github.com/signalfx/splunk-otel-collector.git

Using a terminal window, navigate to the folder examples/otel-logs-splunk and type:

docker-compose up

This will start the OpenTelemetry Collector, our bash script generating data, and Splunk Enterprise. Your terminal will display information as Splunk starts.
Eventually, Splunk will display this table to let you know it is available:

splunk | Wednesday 04 May 2022 02:04:18 +0000 (0:00:00.818) 0:00:47.544 *********
splunk | ===============================================================================
splunk | splunk_common : Start Splunk via CLI ----------------------------------- 11.97s
splunk | splunk_common : Update Splunk directory owner --------------------------- 6.45s
splunk | splunk_common : Update /opt/splunk/etc ---------------------------------- 5.42s
splunk | splunk_common : Get Splunk status --------------------------------------- 2.50s
splunk | Gathering Facts --------------------------------------------------------- 1.67s
splunk | splunk_common : Hash the password --------------------------------------- 1.23s
splunk | splunk_common : Set options in logs ------------------------------------- 1.11s
splunk | splunk_common : Test basic https endpoint ------------------------------- 1.00s
splunk | Check for required restarts --------------------------------------------- 0.82s
splunk | splunk_standalone : Check for required restarts ------------------------- 0.77s
splunk | splunk_standalone : Update HEC token configuration ---------------------- 0.72s
splunk | splunk_standalone : Get existing HEC token ------------------------------ 0.69s
splunk | splunk_standalone : Setup global HEC ------------------------------------ 0.69s
splunk | splunk_common : Check for scloud ---------------------------------------- 0.66s
splunk | splunk_common : Generate user-seed.conf (Linux) ------------------------- 0.63s
splunk | splunk_common : Find manifests ------------------------------------------ 0.54s
splunk | splunk_common : Wait for splunkd management port ------------------------ 0.43s
splunk | splunk_common : Cleanup Splunk runtime files ---------------------------- 0.40s
splunk | splunk_common : Check for existing splunk secret ------------------------ 0.30s
splunk | splunk_common : Check for existing installation ------------------------- 0.30s
splunk | ===============================================================================
splunk |
splunk | Ansible playbook complete, will begin streaming splunkd_stderr.log

Now, you can open your web browser and navigate to http://localhost:18000. You can log in as admin/changeme. You will be met with a few prompts as this is a new Splunk instance. Make sure to read and acknowledge them, and open the default search application. In this application, enter this search to look for logs:

index="logs"

The latest logs generated by the bash script will show. After exploring this example, you can press Ctrl+C to exit from Docker Compose.

Thank you for following along! With this example, you have deployed a simple pipeline to ingest the contents of a file into Splunk Enterprise. If you found this example interesting, feel free to star the repository! Just click the star icon in the top right corner. Any ideas or comments? Please open an issue on the repository.

— Antoine Toulme, Senior Engineering Manager, Blockchain & DLT
I am using the query below to get the daily average users during business hours:

index=pan_logs sourcetype=json_no_timestamp metricname="field total user"
| bin _time span=3h
| stats latest(metricvalue) AS temp_count by metricname _time
| stats sum(temp_count) as "Users" by _time
| eval Date=strftime(_time,"%m/%d/%y")
| eval bustime=_time, bustime=strftime(bustime, "%H")
| eval day_of_week=strftime(_time,"%A")
| where (bustime > 8 AND bustime < 18) AND NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| eventstats avg(Users) as DailyAvgUsers by Date
| eval DailyAvgUsers=round(DailyAvgUsers)
| table Date day_of_week DailyAvgUsers

The query gives 3 counts per day, while I want only 1 per day. When I change the span to 6h it gives me one count, but since I am only counting between 8AM and 6PM, it gives me no count when I run the search at 12PM on a Monday with the 6h span. How can I get one average count per day while keeping the 3h time span?
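A minimal sketch of one way to collapse the output to a single row per day: keep the 3-hour bins for the latest-value logic, but finish with a stats by Date instead of eventstats, so each day is reduced to one averaged row:

index=pan_logs sourcetype=json_no_timestamp metricname="field total user"
| bin _time span=3h
| stats latest(metricvalue) as temp_count by metricname _time
| stats sum(temp_count) as Users by _time
| eval Date=strftime(_time,"%m/%d/%y"), hour=tonumber(strftime(_time,"%H")), day_of_week=strftime(_time,"%A")
| where hour>=8 AND hour<18 AND NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| stats avg(Users) as DailyAvgUsers latest(day_of_week) as day_of_week by Date
| eval DailyAvgUsers=round(DailyAvgUsers)

The final stats by Date replaces the eventstats/table combination, so there is exactly one row per day regardless of how many 3-hour buckets fall inside business hours.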
Hi Experts, I want the Column 1 time picker's default timestamp to show in the "Date/Time Range From" panel. I'm not sure what I am doing wrong; the panel only updates when I select a different date.

<fieldset submitButton="false" autoRun="false">
  <input type="time" token="field2" searchWhenChanged="true">
    <label>Column 1</label>
    <default>
      <earliest>1661144400</earliest>
      <latest>1661230800</latest>
    </default>
    <change>
      <eval token="timeRangeEarliestearliest">if(isnum($field2.earliest$), $field2.earliest$, relative_time(now(), $field2.earliest$))</eval>
      <eval token="timeRangeLatestearliest">if(isnum($field2.latest$), $field2.latest$, relative_time(now(), $field2.latest$))</eval>
      <eval token="prettyPrinttimeRangeFromTimeearliest">strftime($timeRangeEarliestearliest$, "%a, %e %b %Y")</eval>
      <eval token="prettyPrinttimeRangeToTimeearliest">strftime($timeRangeLatestearliest$, "%a, %e %b %Y")</eval>
    </change>
  </input>
  <input type="time" token="field1" searchWhenChanged="true">
    <label>Column 2</label>
    <default>
      <earliest>@d</earliest>
      <latest>now</latest>
    </default>
    <change>
      <eval token="timeRangeEarliestlatest">if(isnum($field1.earliest$), $field1.earliest$, relative_time(now(), $field1.earliest$))</eval>
      <eval token="timeRangeLatestlatest">if(isnum($field1.latest$), $field1.latest$, relative_time(now(), $field1.latest$))</eval>
      <eval token="prettyPrinttimeRangeFromTimelatest">strftime($timeRangeEarliestlatest$, "%a, %e %b %Y")</eval>
      <eval token="prettyPrinttimeRangeToTimelatest">strftime($timeRangeLatestlatest$, "%a, %e %b %Y")</eval>
    </change>
  </input>
</fieldset>
<row>
  <panel>
    <html>
      <h3>Date/Time Range From</h3>
      <table>
        <tr>
          <td>From:</td>
          <td>$prettyPrinttimeRangeFromTimeearliest$</td>
        </tr>
        <tr>
          <td>To:</td>
          <td>$prettyPrinttimeRangeToTimeearliest$</td>
        </tr>
      </table>
    </html>
  </panel>
</row>
<row>
  <panel>
    <html>
      <h3>Date/Time Range</h3>
      <table>
        <tr>
          <td>From:</td>
          <td>$prettyPrinttimeRangeFromTimelatest$</td>
        </tr>
        <tr>
          <td>To:</td>
          <td>$prettyPrinttimeRangeToTimelatest$</td>
        </tr>
      </table>
    </html>
  </panel>
</row>