Hi, I am new to Splunk metrics search. I am sending AWS/EBS metrics to Splunk, and I want to calculate the average throughput and the number of IOPS for my Amazon Elastic Block Store (Amazon EBS) volume. I found the solution for CloudWatch in AWS: https://repost.aws/knowledge-center/ebs-cloudwatch-metrics-throughput-iops, but I don't know how to do the same search in Splunk. This is as far as I can get at the moment:

| mpreview index=my-index | search namespace="AWS/EBS" metricName=VolumeReadOps

I would really appreciate it if someone could help me out.
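A minimal sketch of the CloudWatch formulas from that article (IOPS = (VolumeReadOps + VolumeWriteOps) / period, throughput = (VolumeReadBytes + VolumeWriteBytes) / period) translated to mstats. The index name, the four metric names, the VolumeId dimension, and the 300-second CloudWatch period are assumptions; check what | mcatalog values(metric_name) WHERE index=my-index returns in your environment and adjust:

| mstats sum(VolumeReadOps) as read_ops sum(VolumeWriteOps) as write_ops sum(VolumeReadBytes) as read_bytes sum(VolumeWriteBytes) as write_bytes WHERE index=my-index span=5m BY VolumeId
``` CloudWatch aggregates EBS metrics over a period (assumed 300 s here) ```
| eval avg_iops = round((read_ops + write_ops) / 300, 2)
| eval avg_throughput_KiBps = round((read_bytes + write_bytes) / 300 / 1024, 2)
| table _time VolumeId avg_iops avg_throughput_KiBps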
We have Splunk Heavy Forwarders running in a couple of different regions/accounts in AWS, and we need to ingest CloudWatch Logs into them. The proposed architecture is as follows: CloudWatch Logs (multiple accounts) >> near-real-time streaming through Kinesis Data Firehose (KDF) >> centralized S3 bucket >> SQS >> Splunk Heavy Forwarder. We are looking for an implementation document, mainly for aggregating CloudWatch logs from multiple accounts into S3, and for suggestions to improve the architecture. Direct ingestion from CloudWatch Logs or KDF into Splunk is not preferred; centralized S3 logging is. We would like to reduce management overhead (hence we prefer not to manage Lambdas unless we have to) and to be cost-effective. Kindly include implementation documentation if available.
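For the final hop, the Splunk Add-on for AWS on the Heavy Forwarder can consume the centralized bucket through its SQS-based S3 input (S3 event notifications feed the SQS queue, so no Lambda is needed for that leg). A minimal inputs.conf sketch under those assumptions; the stanza name, account name, queue URL, decoder, index and sourcetype are placeholders to adapt, and the authoritative key list is in the add-on's inputs.conf.spec:

[aws_sqs_based_s3://cloudwatch_central]
aws_account = my_aws_account
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/111111111111/splunk-cw-logs-queue
sqs_queue_region = us-east-1
s3_file_decoder = CustomLogs
sourcetype = aws:cloudwatchlogs
index = aws_logs
interval = 300

For the aggregation leg, cross-account Firehose delivery into one central bucket generally comes down to a bucket policy granting each account's Firehose delivery role s3:PutObject, which keeps the design Lambda-free end to end.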
Hello, I am new to Splunk and need help. Every week I get a vulnerability scan log with 2 main fields: "extracted_Host" and "Risk". Risk values are Critical, High and Medium (I must only search for these Risk values; everything else is excluded). extracted_Host contains many different host IPs. I must work out which host has which Risk values (hosts can have multiple), which risk falls away on which date, and which risk is new. Right now I am here; the problem is that I get only one row per host with all field values, not how many risk classifications are really on that host, and without any time information:

index=nessus Risk IN (Critical,High,Medium) | fields extracted_Host Risk | eval Host=coalesce(extracted_Host,Risk) | stats values(*) as * by Host

Thanks for the help.
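A minimal sketch of the new/resolved comparison, assuming weekly scans, that _time reflects the scan date, and a two-week comparison window:

index=nessus Risk IN (Critical,High,Medium) earliest=-14d@d latest=now
| eval week=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
| stats values(week) as weeks dc(week) as weeks_seen latest(_time) as last_seen by extracted_Host Risk
``` seen in both weeks = ongoing; only current = new; only previous = resolved ```
| eval status=case(weeks_seen=2, "ongoing", weeks="current", "new", weeks="previous", "resolved")
| convert ctime(last_seen)
| table extracted_Host Risk status last_seen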
Just wanting to know if there is a way to check, in one of the fields, whether an email containing malware has been deleted or whether it is still in the inbox?
I run a local splunk-appinspect on packages before uploading them to Splunk Cloud. Each Jenkins run does 'pip install splunk-appinspect'; if it is already installed on the agent, it will of course not be installed again. Here are the job run console logs:

09:11:18 LEVEL="CRITICAL" TIME="2023-10-10 01:11:18,718" NAME="root" FILENAME="main.py" MODULE="main" MESSAGE="An unexpected error occurred during the run-time of Splunk AppInspect"
09:11:18 Traceback (most recent call last):
09:11:18   File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/main.py", line 581, in validate
09:11:18     groups_to_validate = splunk_appinspect.checks.groups(
09:11:18   File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/checks.py", line 205, in groups
09:11:18     check_group_modules = import_group_modules(check_dirs)
09:11:18   File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/checks.py", line 73, in import_group_modules
09:11:18     group_module = imp.load_source(group_module_name, filepath)
09:11:18   File "/usr/lib/python3.9/imp.py", line 171, in load_source
09:11:18     module = _load(spec)
09:11:18   File "<frozen importlib._bootstrap>", line 711, in _load
09:11:18   File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
09:11:18   File "<frozen importlib._bootstrap_external>", line 790, in exec_module
09:11:18   File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
09:11:18   File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/checks/check_source_and_binaries.py", line 17, in <module>
09:11:18     import splunk_appinspect.check_routine as check_routine
09:11:18   File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/check_routine/__init__.py", line 15, in <module>
09:11:18     from .find_endpoint_usage import find_endpoint_usage
09:11:18   File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/check_routine/find_endpoint_usage.py", line 7, in <module>
09:11:18     from pyparsing import Generator
09:11:18 ImportError: cannot import name 'Generator' from 'pyparsing' (/usr/lib/python3/dist-packages/pyparsing.py)

Any idea what exactly is broken, and any suggestions on how to solve it?
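The last frame shows the import resolving to the distro-packaged pyparsing in /usr/lib/python3/dist-packages rather than the pip-resolved version under ~/.local that splunk-appinspect depends on. One way around it is to isolate the tool in a virtualenv so the system package can no longer shadow the pip dependency; a sketch for the Jenkins job (paths and the package name are assumptions):

# build an isolated interpreter so /usr/lib/python3/dist-packages
# cannot shadow the pip-installed pyparsing
python3 -m venv "$WORKSPACE/appinspect-venv"
. "$WORKSPACE/appinspect-venv/bin/activate"
pip install --upgrade pip splunk-appinspect
# then validate as before, e.g.:
splunk-appinspect inspect myapp.tar.gz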
I'm getting an error when I put the earliest and latest keywords in my search query; I have set the time picker to correspond to the values used in the query. It shows 'Unknown search command 'earliest'' when I try to use them. I'm using Splunk Enterprise. This is my query:

index=sample ServiceName="cet.prd.*" | earliest=-3d latest=now()
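earliest and latest are time modifiers of the base search, not commands, so they cannot follow a pipe. Moved before the first pipe (and with now instead of the eval-style now()), the query would be:

index=sample ServiceName="cet.prd.*" earliest=-3d latest=now

Note that time modifiers written in the search string override the time picker.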
Hi, I have 2 lookups, lookup A and lookup B. Lookup A is kept updated by a Splunk query and lookup B is maintained manually. Both lookups contain the same fields: Hostname, IP and OS. I need to compare both lookups and bring out the non-matching Hostname and IP values. Please assist me with this. Thank you.
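A minimal sketch, assuming the files are named lookup_a.csv and lookup_b.csv; it keeps the Hostname/IP pairs that appear in exactly one of the two files:

| inputlookup lookup_a.csv
| eval source_lookup="A"
| inputlookup append=true lookup_b.csv
``` rows coming from lookup B have no source_lookup yet, so tag them here ```
| eval source_lookup=coalesce(source_lookup,"B")
| stats values(source_lookup) as found_in dc(source_lookup) as lookup_count by Hostname IP
| where lookup_count=1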
I have a search that gives me the total license usage in GB for a given time:

index=_internal source=*license_usage.log type=Usage pool=* | stats sum(b) as bu | eval gbu=round(bu/1024/1024/1024,3) | fields gbu

I'd like to have a timechart/graph showing the total for each hour of a given day. Is this possible with timechart?
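A sketch of the same calculation bucketed per hour; only the stats step changes to a timechart:

index=_internal source=*license_usage.log type=Usage pool=*
| timechart span=1h sum(b) as bu
| eval gbu=round(bu/1024/1024/1024,3)
| fields _time gbu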
Compatibility
This is the compatibility for the latest version: Splunk Enterprise, Splunk Cloud, Splunk IT Service Intelligence.
Platform Version: 9.0, 8.2
CIM Version: 4.x
Hello, I'm trying to create a dashboard in Dashboard Studio but I can't use the map visualization because it doesn't render all the way through. Here are two examples of queries and their respective visualizations:

1) Map (Bubble)

index=************* | iplocation src | geostats latfield=lat longfield=lon count by Country

2) Map (Choropleth)

index=************** | iplocation src | lookup geo_countries latitude AS lat longitude AS lon OUTPUT featureId AS country | stats count by country | geom geo_countries featureIdField=country

This only happens in Dashboard Studio; the maps render perfectly in classic dashboards. How can I fix this? Is any configuration missing?

Thank you!
App: https://splunkbase.splunk.com/app/833
It looks like the nfsiostat.sh script is not compatible with RHEL9. I'm testing with Rocky 9.2, and the nfsiostat command output already differs from 7.9. EDIT: it seems to support RHEL9 explicitly (without the new columns), but NOT Rocky9.

Example from 7.9:

# nfsiostat
server:/mnt/yumrepo mounted on /repos/pkg.repo.d:

   op/s    rpc bklog
  33.88         0.00

read:   ops/s     kB/s    kB/op      retrans   avg RTT (ms)   avg exe (ms)
        1.382   43.682   31.613  3357 (0.0%)          0.612          1.551
write:  ops/s     kB/s    kB/op      retrans   avg RTT (ms)   avg exe (ms)
        4.595  138.038   30.041  1041 (0.0%)          1.659         11.039

Example from Rocky 9.2 (the first "op/s" became "ops/s", and there are 2 new metrics, "avg queue (ms)" and "errors"):

server:/mnt/yumrepo mounted on /repos/pkg.repo.d:

  ops/s    rpc bklog
  0.453        0.000

read:   ops/s    kB/s    kB/op   retrans   avg RTT (ms)   avg exe (ms)   avg queue (ms)    errors
        0.000   0.001    1.356  0 (0.0%)          0.096          0.108            0.006  0 (0.0%)
write:  ops/s    kB/s    kB/op   retrans   avg RTT (ms)   avg exe (ms)   avg queue (ms)    errors
        0.001   0.035   25.519  0 (0.0%)          0.562          0.600            0.027  0 (0.0%)

The nfsiostat.sh script cannot parse the new format, and currently I get something like this:

# /usr/ipbx/splunkforwarder/etc/apps/Splunk_TA_nix/bin/nfsiostat.sh
Mount Path r_op/s w_op/s r_KB/s w_KB/s rpc_backlog r_avg_RTT w_avg_RTT r_avg_exe w_avg_exe
server:/mnt/yumrepo /repos/pkg.repo.d read: write: ops/s ops/s 0.000 avg avg RTT RTT 0.000 0.001 rpc 0.096 0.108 0.001 0.453 read: 0.000 ops/s avg RTT write: kB/o ops/s rpc mounted
Two of my indexers are not working; they are not receiving data from the Universal Forwarder. When I run ./splunk display listen it shows 9998 is listening, and ./splunk list forward-server gives the result below:

Active forwards:
    10.246.250.154:9998 (ssl)
Configured but inactive forwards:
    10.246.250.155:9998
    10.246.250.156:9998

Let me know what I can do to activate the other two indexers.
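One detail that stands out is that only the active forward is marked (ssl). If .155 and .156 were added to outputs.conf without the SSL settings that .154 uses, the TLS handshake will fail and they will stay inactive. A sketch of an outputs.conf that applies one set of settings to all three peers (the stanza name is an assumption, and the SSL keys should mirror whatever already works for .154):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.246.250.154:9998, 10.246.250.155:9998, 10.246.250.156:9998
# assumed: the same SSL settings (useSSL, clientCert, sslPassword)
# that work for 10.246.250.154 apply to all three indexers
useSSL = true

It is also worth confirming basic network reachability from the forwarder to .155 and .156 on port 9998, in case a firewall only permits the first indexer.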
Hi, looking for some assistance with a regex to blacklist events in inputs.conf on Windows systems. We modified the inputs.conf located at /opt/apps/splunk/etc/deployment-apps/Splunk_TA_windows/local/inputs.conf and applied this regex:

blacklist1 = EventCode="4688" $XmlRegex="<Data Name='NewProcessName'> (C:\\Program Files\\Windows Defender Advanced Threat Protection\\MsSense.exe)|(C:\\Program Files (x86)\\Tanium\\Tanium Client\\TaniumCX.exe) </Data>"

I attempted all available methods to blacklist the events above, but they did not take effect. Do we need to make modifications in order to successfully blacklist them? Thanks
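A few things look off in the attempt: $XmlRegex only applies when the input has renderXml = true, the literal parentheses in (x86) and the dots in the file names need regex escaping, the alternation should sit inside one group, and the stray spaces around the paths will prevent a match. A sketch of a corrected stanza under those assumptions (match the stanza to the one you actually deploy, and reload the forwarder afterwards):

[WinEventLog://Security]
renderXml = true
# drop 4688 events whose NewProcessName is either agent binary
blacklist1 = $XmlRegex="<EventID>4688</EventID>.*<Data Name='NewProcessName'>(C:\\Program Files\\Windows Defender Advanced Threat Protection\\MsSense\.exe|C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumCX\.exe)</Data>"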
Hi All, we are trying to get the incidents which are in the open state (i.e. AlertStatus equal only to CREATE). The table output is below. Here IncidentID 1414821 has both AlertStatus = CLEAR and AlertStatus = CREATE, so this IncidentID should not get displayed; we need IncidentIDs whose only AlertStatus is CREATE. We ran:

| eval IncidentID=case(AlertStatus="CREATE" AND AlertStatus!="CLEAR",IncidentID) | table IncidentID AlertStatus

When we run the query it should only display IncidentID value 1437718. Thanks and Regards
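The case() above runs per event, where AlertStatus is a single value, so it can never see both statuses at once. A sketch that first collects the statuses per incident and then keeps only incidents whose sole status is CREATE:

| stats values(AlertStatus) as statuses by IncidentID
``` an incident qualifies only if CREATE is its one and only status ```
| where mvcount(statuses)=1 AND statuses="CREATE"
| table IncidentID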
Hi All, I created a test user, assigned the viewer role, and provided read-only access. As in the screen above, the test user is still able to see the knowledge objects. How do I remove the knowledge objects from their view? Please help me with the process; I need to remove tags, event types, lookups, user interface, ...
I am trying to pass a drop-down value from one dashboard to another. On the first dashboard I set up the interactions to set the token value. Do I need to set anything up on the second dashboard?
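Typically yes: the second dashboard only picks the value up if it has a token of the same name, usually backed by an input, and the link from the first dashboard passes it as a form parameter. A sketch, assuming the token is called index_token:

/app/search/second_dashboard?form.index_token=standard

On the second dashboard, define an input (e.g. a dropdown) whose token is index_token and reference it in the panel searches as $index_token$.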
Hello! I'm working on setting up the integration between Splunk SOAR and Splunk using the Splunk App for SOAR Export. I was able to configure my SOAR server in the app and verify connectivity, but I'm running into errors when trying to use the alert action associated with the app. When using the "Send to SOAR" alert action, I receive "Alert script returned error code 5" in the logs. I wasn't able to find any information regarding this error code so I'm not sure what could be causing it. Any help would be appreciated, thank you!
Hey Community, we have 2 BIG-IP load balancer VMs and need to have the OS logs (like auditd) forwarded to Splunk. So this is not about the F5 application logs themselves, but the OS logs from the underlying system. Is there a way to do this? I'd much appreciate your support.
I have time series data like this:

_time
digital_value: can be either 0.1 or 1 (see note)
analog_value: can be 0, 100, 500, 1000, 5000, 10000

Note: it's actually 0 or 1, but 0 doesn't show in a bar graph.

I want to plot this data in a diagram like this:

X axis = _time
digital_value=0.1 as a red bar
digital_value=1 as a green bar
analog_value as an overlaid line graph, with a log-scale Y axis

To colorize digital_value, I understand I must split it into two series, like this:

| eval digital_value_red = if(digital_value=0.1, 0.1, null())
| eval digital_value_green = if(digital_value=1, 1, null())
| fields - digital_value

However, this creates two bars per data point, where only the non-null one is shown and the other one leaves a gap. That way, I don't have equally spaced bars along the X axis any more.

So, stacked bars? Yes, but that doesn't work with a log-scale Y axis for the overlaid line graph. So, calculate log(analog_value) and plot that on a linear Y axis? While that produces a proper visual, you can't read the value of analog_value any more (only its log).

Any ideas how I can achieve a colorized bar graph + log-scale overlay?
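One sketch that may get there: fill the "other" series with 0 instead of null() so every data point still has exactly one visible stacked bar, then put analog_value on the second Y axis, which classic Simple XML charting can scale logarithmically independently of the bar axis. The span, field names and option values below are assumptions for a classic chart panel:

| eval digital_value_red = if(digital_value=0.1, 0.1, 0)
| eval digital_value_green = if(digital_value=1, 1, 0)
| timechart span=1m max(digital_value_red) as red max(digital_value_green) as green max(analog_value) as analog

and in the panel options:

<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.overlayFields">analog</option>
<option name="charting.axisY2.enabled">true</option>
<option name="charting.axisY2.scale">log</option>
<option name="charting.fieldColors">{"red": 0xFF0000, "green": 0x008000}</option>

Since the 0-valued series contributes nothing to the stack, every point keeps a single colored bar and the X axis spacing stays even.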
Hi Splunkers, I have a dropdown for index_value with console, standard and aws as options, and separate pie charts for standard, console and aws. Clicking any one of these 3 pie charts takes you to another dashboard; there I need the drilldown to set the value of index_value to standard when the standard pie chart was clicked, and likewise for the other two selections. Thanks in advance, Manoj Kumar S