All Topics

We can see only about 10 hosts in index=os sourcetype=cpu and index=os source=vmstat, but we should be getting all of our Unix/Linux hosts for that sourcetype and source. We use these searches to generate high-CPU-utilization and high-memory-utilization incidents. Through the end of August we could see 100+ hosts for this source and sourcetype, but since then we see only around 10, 15, or 7. Please help me with this.
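One quick way to check when the host count actually dropped (a sketch; the time range is illustrative) is a daily distinct-host count over the affected data:

```spl
| tstats dc(host) AS host_count
    where index=os (sourcetype=cpu OR source=vmstat) earliest=-60d@d
    by _time span=1d
```

If host_count falls sharply on a specific day, the forwarders likely stopped sending around then; comparing against | metadata type=hosts index=os can also help distinguish "stopped sending" from "aged out of the search window".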
I have added this file to monitoring to ingest data, but the data is not getting ingested. The log file path is /tmp/mountcheck.txt.

[monitor:///tmp/mount.txt]
disabled = 0
index = Test_index
sourcetype = Test_sourcetype
initCrcLen = 1024
crcSalt = "unique_salt_value"
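One thing worth double-checking (an observation from the post itself, not a confirmed root cause): the text says the log file is /tmp/mountcheck.txt, but the stanza monitors /tmp/mount.txt. A stanza matching the stated path would look like:

```ini
[monitor:///tmp/mountcheck.txt]
disabled = 0
index = Test_index
sourcetype = Test_sourcetype
initCrcLen = 1024
crcSalt = "unique_salt_value"
```

It is also worth confirming that the index Test_index actually exists on the indexer, since events routed to a non-existent index are dropped (and Splunk index names are conventionally lowercase).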
I have the stanza below to ingest a JSON data file. It is deployed via the deployment server, and the props.conf below was added on the HF. Initially I uploaded the file using the Splunk UI, but the events came in as one line.

[monitor:///var/log/Netapp_testobject.json]
disabled = false
index = Test_index
sourcetype = Test_sourcetype

[Test_sourcetype]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = ([{}\,\s]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
EVENT_BREAKER = ([{}\,\s]+)
INDEXED_EXTRACTIONS = json
KV_MODE = json
TRUNCATE = 0

The JSON data looks like this:

[
  {
    "Name": "test name",
    "Description": "",
    "DNSHostname": "test name",
    "OperatingSystem": "NetApp Release 9.1",
    "WhenCreated": "2/13/2018 08:24:22 AM",
    "distinguishedName": "CN=test name,OU=NAS,OU=AVZ Special Purpose,DC=corp,DC=amvescap,DC=net"
  },
  {
    "Name": "test name",
    "Description": "London DR smb FSX vserver",
    "DNSHostname": "test name",
    "OperatingSystem": "NetApp Release 9.13.0P4",
    "WhenCreated": "11/14/2023 08:43:36 AM",
    "distinguishedName": "CN=test name,OU=NAS,OU=AVZ Special Purpose,DC=corp,DC=amvescap,DC=net"
  }
]
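A hedged observation on the stanza above (a sketch, not a verified fix): combining INDEXED_EXTRACTIONS=json with KV_MODE=json commonly produces duplicated fields, and a LINE_BREAKER of ([{}\,\s]+) breaks on every brace, which can shred the objects. One alternative props.conf sketch that breaks between array elements and keeps each object whole:

```ini
[Test_sourcetype]
SHOULD_LINEMERGE = false
# break at "},{" boundaries; the captured comma/whitespace is discarded,
# leaving one complete JSON object per event
LINE_BREAKER = \}(,[\r\n\s]*)\{
DATETIME_CONFIG = CURRENT
NO_BINARY_CHECK = true
CHARSET = UTF-8
# search-time JSON extraction only; INDEXED_EXTRACTIONS dropped to avoid duplicates
KV_MODE = json
TRUNCATE = 0
```

The array's leading [ and trailing ] may still cling to the first and last events; a SEDCMD to strip them is a common companion.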
Hi, we have created an application using Splunk Add-on Builder (https://apps.splunk.com/app/2962/) and wrote a Python script for the alert action. While validating the add-on, we get two errors; the rest of the test cases pass. The errors are below.

First error:

{
  "validation_id": "v_1703053121_88",
  "ta_name": "TA-testaddon",
  "rule_name": "Validate app certification",
  "category": "app_cert_validation",
  "ext_data": {"is_visible": true},
  "message_id": "7004",
  "description": "Check that no files have *nix write permissions for all users (xx2, xx6, xx7). Splunk recommends 644 for all app files outside of the bin/ directory, 644 for scripts within the bin/ directory that are invoked using an interpreter (e.g. python my_script.py or sh my_script.sh), and 755 for scripts within the bin/ directory that are invoked directly (e.g. ./my_script.sh or ./my_script). Since appinspect 1.6.1, check that no files have nt write permissions for all users.",
  "sub_category": "Source code and binaries standards",
  "solution": "There are multiple errors for this check. Please check \"messages\" for details.",
  "messages": "[{\"result\": \"warning\", \"message\": \"Suppressed 813 failure messages\", \"message_filename\": null, \"message_line\": null}, {\"result\": \"failure\", \"message\": \"A posix world-writable file was found. File: bin/ta_testaddon/aob_py3/splunktalib/splunk_cluster.py\", \"message_filename\": null, \"message_line\": null}]",
  "severity": "Fatal",
  "status": "Fail",
  "validation_time": 1703053540
}

Second error:

{
  "validation_id": "v_1703053679_83",
  "ta_name": "TA-testaddon",
  "rule_name": "Validate app certification",
  "category": "app_cert_validation",
  "ext_data": {"is_visible": true},
  "message_id": "7002",
  "description": "Check that the dashboards in your app have a valid version attribute.",
  "sub_category": "jQuery vulnerabilities",
  "solution": "Change the version attribute in the root node of your Simple XML dashboard default/data/ui/views/home.xml to `<version=1.1>`. Earlier dashboard versions introduce security vulnerabilities into your apps and are not permitted in Splunk Cloud File: default/data/ui/views/home.xml",
  "severity": "Fatal",
  "status": "Fail",
  "validation_time": 1703053994
}

Kindly help in resolving these.
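For the second error, the usual fix (hedged; the file path comes from the message itself) is to set the version attribute on the dashboard's root node in default/data/ui/views/home.xml:

```xml
<form version="1.1">
  <!-- existing dashboard content unchanged -->
</form>
```

Note that the message's literal suggestion `<version=1.1>` is not valid XML; version is an attribute of the root <form> or <dashboard> element. For the first error, clearing the world-writable bit on the flagged files (e.g. permissions 644) before packaging is the standard remedy.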
Hello, is there a way to see the full URL of a particular slow Transaction Snapshot? I believe some of the slow search requests in our system could be caused by a specific user input that is part of the dynamic URL, but in the Transaction Snapshot dashboard (and in the Transaction Snapshot overview) I only see the aggregated short URL without the user input. Full URL example: https://host/Search/userInput (Screenshots of the Transaction Snapshot dashboard and the individual transaction overview omitted.) Also, I don't think I have access to the Analytics dashboard.
Hi, I have two clustered indexers which are now constantly generating crash logs in /splunk/var/log/splunk every few minutes, and I am unable to figure out the cause from the crash logs or from the error in splunkd.log. Would anyone here be able to shed some light on this?

splunkd error:

WARN SearchProcessRunner [19356 PreforkedSearchesManager-0] - preforked process=0/38 status=killed, signum=6, signame="Aborted", coredump=1, uptime_sec=37.282768, stime_sec=19.850199, max_rss_kb=472688, vm_minor=902282, vm_major=37, fs_r_count=608, fs_w_count=50856, sched_vol=3413, sched_invol=10923

Contents of one of the crash logs:

[build b6436b649711] 2023-11-02 11:39:40
Received fatal signal 6 (Aborted) on PID 23624.
Cause: Signal sent by PID 23624 running under UID 1001.
Crashing thread: BucketSummaryActorThread
Registers:
  RIP: [0x00007F0D7E2DA387] gsignal + 55 (libc.so.6 + 0x36387)
  RDI: [0x0000000000005C48]
  RSI: [0x00000000000059CC]
  RBP: [0x0000000000000BE7]
  RSP: [0x00007F0CF85F2268]
  RAX: [0x0000000000000000]
  RBX: [0x0000562A9ADF7598]
  RCX: [0xFFFFFFFFFFFFFFFF]
  RDX: [0x0000000000000006]
  R8:  [0x00007F0CF85FF700]
  R9:  [0x00007F0D7E2F12CD]
  R10: [0x0000000000000008]
  R11: [0x0000000000000206]
  R12: [0x0000562A9AC0E070]
  R13: [0x0000562A9AF9CFB0]
  R14: [0x00007F0CF85F2420]
  R15: [0x00007F0CF806F260]
  EFL: [0x0000000000000206]
  TRAPNO: [0x0000000000000000]
  ERR: [0x0000000000000000]
  CSGSFS: [0x0000000000000033]
  OLDMASK: [0x0000000000000000]

Regards,
Zijian
I am trying to make a box plot graph using <viz>, but my code produces this error:

"Error in 'stats' command: The number of wildcards between field specifier '*' and rename specifier 'lowerquartile' do not match. Note: empty field specifiers implies all fields, e.g. sum() == sum(*)"

My code is this:

<viz type="viz_boxplot_app.boxplot">
  <search>
    <query>index=idx_prd_analysis sourcetype="type:prd_analysis:result" corp="AUS"
| eval total_time = End_time - Start_time
| stats median, min, max, p25 AS lowerquartile, p75 AS upperquartile by total_time
| eval iqr=upperquartile-lowerquartile
| eval lowerwhisker=median-(1.5*iqr)
| eval upperwhisker=median+(1.5*iqr)</query>
    <earliest>$earliest$</earliest>
    <latest>$latest$</latest>
  </search>
  <option name="drilldown">all</option>
  <option name="refresh.display">progressbar</option>
</viz>

I don't use any eval or string words in the stats, but it still happens. How can I solve this problem?
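For context, stats aggregation functions need a field argument, so bare median, min, max and p25 AS ... (no parentheses) is what triggers the wildcard/rename mismatch. A sketch of the pipeline with explicit arguments (the by clause is dropped here, since grouping by total_time would put every distinct value in its own group; group by a category such as corp if that was the intent):

```spl
| eval total_time = End_time - Start_time
| stats median(total_time) AS median,
        min(total_time) AS min,
        max(total_time) AS max,
        p25(total_time) AS lowerquartile,
        p75(total_time) AS upperquartile
| eval iqr = upperquartile - lowerquartile
| eval lowerwhisker = median - (1.5 * iqr)
| eval upperwhisker = median + (1.5 * iqr)
```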
I'm sorry if this is hard to read; I don't understand English well and am using a translation app. Currently, I am unable to chain reports in Splunk so that a follow-up report runs only after its prerequisite report completes. Today the follow-up processing is scheduled with a time margin after the expected completion of the prerequisite process, but with this method there is a risk that the follow-up starts before the prerequisite has finished. We have a lot of reports to process and don't want to lengthen the schedule interval. Does anyone know a solution to this challenge?
Hello, I am trying to blacklist Windows event code 4769 by TaskCategory=Kerberos Service Ticket Operations. This regex is not working:

blacklist7 = EventCode="4769" TaskCategory="\w+\s\w+\s\w+\s\w+"

I've also tried:

blacklist7 = EventCode="4769" TaskCategory="Kerberos Service Ticket Operations"
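A couple of hedged things to try (suggestions, not verified fixes): blacklist matching in inputs.conf is regex-based per key, so escaping the whitespace explicitly, or matching on the rendered Message text if TaskCategory is not populated in your events, are common suggestions:

```ini
# inputs.conf on the forwarder, under the Security event log stanza
[WinEventLog://Security]
# escape whitespace explicitly rather than relying on literal spaces
blacklist7 = EventCode="4769" TaskCategory="Kerberos\s+Service\s+Ticket\s+Operations"
# alternative: match on the message text instead of TaskCategory
# blacklist8 = EventCode="4769" Message="Kerberos\s+Service\s+Ticket"
```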
Hello, I have tried numerous configurations to get my Splunk Universal Forwarder to connect to my Splunk Enterprise instance, with no luck. I am trying to forward data to my indexer on port 3389, and the only info in the logs reads:

WARN AutoLoadBalancedConnectionStrategy [136236 TcpOutEloop] - Cooked connection to ip=XX.XX.XX.XX:3389 timed out

I have checked telnet on that port in both directions and the connection succeeds. Any advice would be appreciated.
December 2023 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious! We're back with another edition of indexEducation. Oh, but this month we've got a fun holiday edition. It's our way of wrapping up the year and sharing our thanks to you for being the best community of users and learners on the planet. Until next year, we leave you with this Splunky rendition of an old holiday classic.

The 12 Days of Splunk-mas

On the first day of Splunk-mas, my true love gave to me
~ A Catalog of Classes for Free ~
Learn anywhere, anytime – for free

On the second day of Splunk-mas, my true love gave to me
~ Two Ways to Learn It ~
Instructor-led and self-paced classes

On the third day of Splunk-mas, my true love gave to me
~ Three Ways to Fast Start ~
We offer Fast Start bundles

On the fourth day of Splunk-mas, my true love gave to me
~ Four Classic T-Shirts ~
Get rewarded with Splunk swag for paid learning

On the fifth day of Splunk-mas, my true love gave to me
~ Five Golden Badges ~
Validate your expertise with Splunk Certification badges

On the sixth day of Splunk-mas, my true love gave to me
~ Six Ways to Prove It ~
Discover how proficiency in Splunk has career benefits

On the seventh day of Splunk-mas, my true love gave to me
~ Seven SOCs Securing ~
Take Splunk Security courses for the SOC analyst

On the eighth day of Splunk-mas, my true love gave to me
~ Eight Labs a Launching ~
Dozens of instructor-led and self-paced courses using hands-on labs

On the ninth day of Splunk-mas, my true love gave to me
~ Nine ALPs a Teaching ~
Authorized Learning Partners (ALPs) across the globe provide localized learning

On the tenth day of Splunk-mas, my true love gave to me
~ Ten Freshmen Filtering ~
Splunk Academic Alliance offers Splunk training at the university level

On the eleventh day of Splunk-mas, my true love gave to me
~ Eleven Observations ~
Splunk Observability provides insights and our O11y courses show you how

On the twelfth day of Splunk-mas, my true love gave to me
~ Twelve Bootcamps Drumming ~
Attend Splunk University to get hands-on-keyboard experience

Thanks for sharing a few minutes of your day with us and this special holiday edition of the indexEducation newsletter. See you next year!

Answer to Index This: A Splunky rendition of an old holiday classic.
We are using OpenShift version 4.13.24 on the ROSA AWS managed solution. I've been looking at some metrics for the splunk-otel-collector-agent pods that we have running; in particular, we review Kubernetes metrics with Dynatrace. The alerts I am seeing are "High CPU Throttling", which basically means the CPU throttling metric is nearly at, or at, the same level as the CPU usage metric. The pods are configured for Splunk Platform. For these pods, I reviewed the YAML of the running instance; we include the following configuration:

- resources:
    limits:
      cpu: 200m
      memory: 500Mi
    requests:
      cpu: 200m
      memory: 500Mi

As a workaround I was thinking of increasing the cpu value under requests (and limits); however, I haven't tried this yet. Has anyone else observed high CPU throttling issues? Thank you.
Hello, I need some help. I create a CSV file on a remote server from a MySQL query and forward the CSV file from the remote server to Splunk; I can read the data. The CSV file is overwritten each day and may have one line of data or multiple lines; it is a list of devices that have gone down. If no devices are down, the file has only the header and a data line that says "No Devices Down". I only want to see data from the file for the day the file is written. The challenge is that Splunk indexes the data and retains it over time, so while I want only one day's info from the file, Splunk has all the historical data indexed. How can I return only the data for the day, not all the data in the Splunk index? Thanks, EWHolz
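One common pattern (a sketch; the index, source, and field names are placeholders) is to keep only the most recent load of the file, so earlier days' copies are ignored even though they remain indexed:

```spl
index=your_index source="*/devices_down.csv"
| eval itime = _indextime
| eventstats max(itime) AS latest_load
| where itime = latest_load
```

Alternatively, if the file is written once per day, simply constraining the search time range to today (earliest=@d latest=now) achieves the same thing.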
Hi, how can I download Splunk APM for on-premises use? FYI: I don't want to use the cloud version. Thanks
I have a key called message. Inside the value are several results, but I need to extract only one result in the middle. Sample:

message: template: 1234abcd, eeid: 5678efgh, consumerid: broker

My rex is below; it returns the template value but also the results for eeid and consumerid, when I only need the template value of 1234abcd.

| rex field=message "template: (?<TemplateID>[^-]+)"
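The likely issue is that [^-]+ matches any run of characters other than a hyphen, so it keeps going past the commas into the rest of the value. Stopping at the first comma is one fix (a sketch against the sample shown):

```spl
| rex field=message "template:\s*(?<TemplateID>[^,]+)"
```

Against the sample, TemplateID should capture 1234abcd and nothing after the comma.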
Our system has a lot of reports defined and I'm tasked with cleaning them up. The first thing I want to do is determine when each was last used. I found some searches that are supposed to help, but they are too old or something; the results are invalid (e.g. I am getting back alerts and searches when I want only reports). Out of 199 reports, 7 are scheduled, so I can guess when those ran last. Can someone show me a search that returns reports, each with its last run date? Thanks!
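One hedged starting point (the filters are assumptions; "reports" are approximated here as saved searches with no alert condition) is to combine the REST list of saved searches with run records from the _audit index:

```spl
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search alert_type="always"
| fields title
| join type=left title
    [ search index=_audit action=search savedsearch_name=* earliest=-90d
      | stats latest(_time) AS last_run by savedsearch_name
      | rename savedsearch_name AS title ]
| convert ctime(last_run)
```

Reports that never appear in _audit within the window show an empty last_run; widen earliest to look further back.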
Hi, is there a way to check dashboard load time? For example, if I choose today's timestamp and hit Submit, how long does it take the panels to return the data for that time range? Thanks, Selvam.
I'm sending $phrase$ in an email notification, but it doesn't make it through because Splunk assumes it is a token. Is there a way to send this without Splunk recognizing it as a variable? Thanks
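If the goal is the literal text $phrase$ in the message, Splunk's token substitution treats a doubled dollar sign as an escaped literal $, so writing $$phrase$$ should deliver $phrase$ (a sketch; the stanza name and text are made up):

```ini
# savedsearches.conf: $$ is the escape for a literal dollar sign in
# token-substituted alert fields, so $$phrase$$ arrives as $phrase$
[my_alert]
action.email.subject = Flagged term $$phrase$$ seen in today's results
action.email.message.alert = The literal text $$phrase$$ was detected.
```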
I have an index set up that holds a number of fields, one of which is a comma-separated list of reference numbers, and I need to be able to search within this field via a dashboard. This is fine for a single reference, since we can search within the field and wrap the dashboard parameter in wildcards, but for multiple values, which can be significant, I cannot see a way of searching. I have looked at split and IN, but neither seems to provide what I need, though that may be down to what I tried.

Example data: Keys="272476, 272529, 274669, 714062, 714273, 845143, 851056, 853957, 855183"

I need to be able to enter any number of keys, in any order, and find any records that contain ANY of the keys, not all of them in a set order. So for the above, it should match if I search for (853957) or (855183, 714062) or (272476, 714062, 855183). Is anyone able to point me towards a logical solution? It will be a key aspect of our use of Splunk, enabling users to copy/paste a list of reference numbers and assess where they occur in our logs.
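One pattern that may work here (a sketch; the index name is a placeholder and the literal key list stands in for a dashboard token): split the comma-separated field into a multivalue field, then match with IN, which succeeds if any value matches:

```spl
index=your_index
| makemv delim=", " Keys
| search Keys IN (853957, 714062, 272476)
```

In a dashboard, the literal list would be replaced by a text-input token, e.g. Keys IN ($keys$), so users can paste any number of references in any order.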
I need to register for certification with Pearson VUE. The registration form requires a first name and last name matching my government ID, but my government ID has only a first name; there is no last name on it. What should I enter in the last-name field in this scenario? Please help.