Hi Team, I'm new to Splunk and need a little guidance with fixing errors that occurred when I set up a directory (/var/log) from Ubuntu as a monitor input.

Health Status of Splunkd - Real-time Reader-0
Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.

Last 50 related messages (excerpt):
02-04-2023 20:02:25.936 -0800 WARN TailReader [4979 tailreader0] - Could not send data to output queue (parsingQueue), retrying...
02-04-2023 20:02:25.910 -0800 WARN TailReader [4980 batchreader0] - Could not send data to output queue (parsingQueue), retrying...
02-04-2023 20:02:20.904 -0800 WARN TailReader [4979 tailreader0] - Enqueuing a very large file=/var/log/auth.log.1 in the batch reader, with bytes_to_read=9885261283, reading of other large files could be delayed
02-04-2023 20:02:20.875 -0800 INFO TailReader [4979 tailreader0] - Ignoring file '/var/log/wtmp' due to: binary
02-04-2023 20:02:19.846 -0800 INFO TailReader [4966 MainTailingThread] - State transitioning from 1 to 0 (initOrResume).
02-04-2023 20:02:19.844 -0800 INFO TailReader [4980 batchreader0] - batchreader0 waiting to be un-paused
02-04-2023 20:02:19.844 -0800 INFO TailReader [4980 batchreader0] - Starting batchreader0 thread
02-04-2023 20:02:19.844 -0800 INFO TailReader [4980 batchreader0] - Registering metrics callback for: batchreader0
02-04-2023 20:02:19.844 -0800 INFO TailReader [4979 tailreader0] - tailreader0 waiting to be un-paused
02-04-2023 20:02:19.844 -0800 INFO TailReader [4979 tailreader0] - Starting tailreader0 thread
02-04-2023 20:02:19.844 -0800 INFO TailReader [4979 tailreader0] - Registering metrics callback for: tailreader0

Affected health-check components: splunkd > Data Forwarding > File Monitor Input > Forwarder Ingestion Latency, Large and Archive File Reader-0, Real-time Reader-0.
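A rough way to see which pipeline queue is actually filling up is to chart queue fill percentages from splunkd's own metrics.log; this is a generic health-check sketch rather than anything specific to this host:

index=_internal source=*metrics.log group=queue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m avg(fill_pct) by name

Queues that sit near 100% (often the typing or index queue downstream of parsingQueue) usually point at where the backpressure starts.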
Numeral system macros for Splunk v1.1.1 - printing bytes as a human-readable size (e.g. 4KiB, 1023.4MiB, 23.4GiB, 345.67TiB)

Sometimes it is necessary to divide bytes by powers of 1024 and convert them to human-readable units. Writing that calculation in SPL every time makes the SPL long and hard to read, so a common macro keeps it simple. For this purpose, I added 2 macros to Numeral system macros for Splunk v1.1.1:

numeral_binary_symbol(bytes) - binary symbol: KiB, MiB, GiB, TiB, PiB, EiB, ZiB, YiB, RiB, QiB
numeral_binary_symbol(bytes,digits) - binary symbol with an argument for rounding digits

For the other macros provided, click here.

Usage 1
| makeresults count=35 ```THIS SECTION IS JUST CREATING SAMPLE VALUES.```
| streamstats count as digit
| eval val=pow(10,digit-1), val=val+random()%val
| foreach bytes [eval <<FIELD>>=val]
| table digit val bytes
| fieldformat val=tostring(val,"commas")
```THE FOLLOWING LINE MAY BE WHAT ACHIEVES THE FORMAT YOU ARE LOOKING FOR.```
| fieldformat bytes=printf("% 9s",`numeral_binary_symbol(bytes,1)`)

Usage 2 - example of sorting sourcetypes in descending order of throughput:
index="_internal" source="*metrics.log" per_sourcetype_thruput
| stats sum(eval(kb*1024)) AS bytes by series
```THE FOLLOWING LINE MAY BE WHAT ACHIEVES THE FORMAT YOU ARE LOOKING FOR.```
| fieldformat bytes=printf("% 10s",`numeral_binary_symbol(bytes,2)`)
| sort 0 - bytes

Points:
The internal value is still in bytes, so results remain sortable.
The kb information can be converted to bytes so that the common macro can be used.
Since fieldformat retains the original value internally, the MiB and KiB displays can still be sorted and compared correctly.

Why use the unusual units KiB and MiB instead of KB and MB? As a side note, to the general public the prefix "kilo" means 1000 and nothing else, but in the computer world it has long been common to treat KB (kilobyte) as 2 to the 10th power, i.e. 1024 bytes, as if that were industry common knowledge. Because this is a source of confusion, standards such as IEC 60027-2, IEEE 1541-2002, and IEC 80000-13:2008 define KiB (kibibyte) and MiB (mebibyte) as byte units based on 1024. These units are not widespread and may feel unfamiliar, but since confusion over numbers leads to misunderstanding, I chose to use them in these macros so that Splunk's output has a single, unambiguous meaning.

Enjoy Splunking!
I have 40 Windows 2012 domain controllers (forwarding through heavy forwarders to cloud) that intermittently stop sending "WinEventLog:Security" events to the cloud indexers. In some cases, one of the servers will send Security events for a few hours and then stop sending altogether. I know the events exist on the server because I can see them through Event Viewer. On the other hand, I don't have the same issue with the Application or System events; they flow all the time. The issue only happens with "WinEventLog:Security" events. So far, I have tried to split the load among 4 heavy forwarders, thinking it was a forwarder congestion issue. I also configured the domain controllers to send directly to cloud, bypassing the heavy forwarders. Alas, no success. Has anyone experienced or heard about this issue? Thank you.
I am attempting a lab for a class. Following the lab's instructions I have hit a loop: I have set up Asset Discovery and enabled the inputs, but the app will not return any information. Any way I try to access the app, it takes me directly to the "continue app setup" page. Did I miss something in the setup? I have restarted Splunk itself, but not the Ubuntu server. Should I wait longer for nmap to finish?
Hey All,

I'm really struggling here. I'm trying to get a universal forwarder to pull in txt logs and edit the "host" field based on the filename/file path.

Example file path: C:\SCAP_SCANS\Sessions\2023-02-04_1200\SERVER-test_SCC-5.7_2023-02-04_111238_Non-Compliance_MS_Windows_10_STIG-2.7.1.txt

Inputs.conf stanza:
[monitor://C:\SCAP_SCANS\Sessions]
disabled = false
ignoreOlderThan = 90d
host_regex = [^\\\]+(?=_SCC)
SHOULD_LINEMERGE = true
MAX_EVENTS = 500000
index = main
source = SCC_SCAP_TXT
sourcetype = SCC_SCAP_TXT
whitelist = (Non-Compliance).*\.(txt)

I've tried a few different regexes, checked btool to make sure there aren't any configs overriding these settings, tried with and without transforms and props files, and verified the regex works against the path with a makeresults query. Anyone have any suggestions?
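If a heavy forwarder or indexer sits in the path, an alternative worth trying is a parsing-time host override in props.conf/transforms.conf instead of host_regex (note that SHOULD_LINEMERGE and MAX_EVENTS are props.conf settings, so they have no effect inside inputs.conf). This is only a sketch against the example path above, with the transform name chosen arbitrarily:

props.conf:
[SCC_SCAP_TXT]
TRANSFORMS-set_host = scc_set_host
SHOULD_LINEMERGE = true
MAX_EVENTS = 500000

transforms.conf:
[scc_set_host]
SOURCE_KEY = MetaData:Source
REGEX = ([^\\]+)_SCC
DEST_KEY = MetaData:Host
FORMAT = host::$1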
Here is the original table, but I need to put some dummy data into Field_B.

Time | Filed_A | Field_B
1    | 10      | Tom
2    | 20      | Smith
3    | 30      | Will
4    | 40      | Sam

Like this:

Time | Filed_A | Field_B
1    | 10      | DUMMY1
2    | 20      | DUMMY2
3    | 30      | Tom
4    | 40      | Smith

I expect the order of Field_B to be: DUMMY1, DUMMY2, Tom, Smith, Will, Sam... Please advise me on how to write the eval command to do this.
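Assuming the rows are already ordered by Time and exactly two dummy values are needed at the top, one possible sketch uses autoregress to shift Field_B down two rows and coalesce to fill the gap (field and dummy names follow the tables above):

... your base search ...
| sort 0 Time
| streamstats count as row
| autoregress Field_B p=2
| eval Field_B=coalesce(Field_B_p2, "DUMMY".row)
| fields - row Field_B_p2
| table Time Filed_A Field_B

With the four sample rows this yields DUMMY1, DUMMY2, Tom, Smith in Field_B; Will and Sam would only appear if two more rows existed to hold them.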
Hello everybody!

I am researching how to integrate Splunk with my service running in Docker. In my case, Splunk Enterprise runs on a different host. One way to achieve this is to use Docker's built-in splunk logging driver. I see that one of the configuration parameters is "splunk-token": "", which is the Splunk HTTP Event Collector token that needs to be created in Splunk Enterprise. My question is: would I be required to create separate HEC tokens for each of the microservice projects? Let's say we have 10 microservice projects running which need to integrate with Splunk. Does that mean I would have to create 10 different tokens in Splunk Enterprise?
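For what it's worth, a single HEC token can usually be shared across services, with the services distinguished by the other splunk-* log-opts; this is just a sketch with placeholder names and URL, not a recommendation on token hygiene:

docker run --log-driver=splunk \
  --log-opt splunk-url=https://splunk.example.com:8088 \
  --log-opt splunk-token=<shared-hec-token> \
  --log-opt splunk-index=docker \
  --log-opt splunk-sourcetype=orders-service \
  --log-opt splunk-source=orders-service \
  my-orders-service:latest

Whether to use one token or ten is then mostly an access-control and revocation question rather than a technical requirement.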
Today we have an on-prem cluster with physical indexer servers. Our disks fail from time to time, dragging cluster performance down. Is there a query that would help catch disk errors?
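As a starting point, splunkd's own logs can be searched for I/O-level failures; the exact error strings vary by OS, so treat the list below as examples to extend:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
    ("No space left on device" OR "Input/output error" OR "Read-only file system")
| stats count latest(_time) as last_seen by host component
| convert ctime(last_seen)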
I am writing a query to correlate across two different indexes. One index has a userID field. I want the query to match a field in the second index and output additional fields from the second index. Index 'idx1' has a field named usr; for the sake of this example, there is a user called 'jdoe'. Index 'idx2' has a field named user, which contains 'jdoe', along with another field called account ID, which has the name spelled out as 'John Doe'. I want the query to take the usr field content from idx1 and use that info to pull the contents of the 'account ID' field in idx2. What's the best way to accomplish this?
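One common pattern is to search both indexes at once and aggregate on a normalized user field; the field name accountID below is an assumption, so substitute the real one (e.g. 'account ID' in single quotes if it really contains a space):

(index=idx1) OR (index=idx2)
| eval joined_user=coalesce(usr, user)
| stats values(accountID) as accountID, dc(index) as index_count by joined_user
| where index_count=2

This avoids the limits of the join command and returns one row per user that appears in both indexes.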
I've created tokens with local authentication and shared the ID and password with the users of the applications needing access. So a single user ID, mapped to a role, works for multiple users running Grafana, in this case. I'd like to do the same thing using SAML. Is it possible (and reasonable) to create a "generic token", for lack of a better phrase, and provide that to the custom dashboard users? It may still be Grafana, or maybe some other mechanism yet to be determined, passing a search string to Splunk. For example, if I had 10 users to configure for API access, it would seem that I'd need to generate 10 tokens, based on what I get out of the documentation. Can anyone direct me? Thank you in advance.
Background: I am sending data to Splunk Cloud through an intermediate forwarder (a universal forwarder) from multiple source instances, some in Pacific time and some in UTC, that do not support HEC or the required TLS versions. As of now, this is the only way of sending the logs.

Sources (PT and UTC) > Intermediate Forwarder > Splunk Cloud

Problem: Some of the source instances are in the Pacific time zone while the intermediate forwarder is in UTC. The logs coming from the instances in PT show up in UTC time in Splunk Cloud, so Splunk Cloud shows the logs as being from a time earlier than when they were really generated. I should also say that the logs have a timestamp (PT, the correct time) in them, always within roughly the first 70 characters; that is the time I want them to be shown in. Changing the intermediate forwarder's timezone to PT fixes the issue for the instances in PT but messes up the instances that are in UTC.

As a solution, I saw that timezones can be configured in the props.conf file, located (after creation) in the /opt/splunkforwarder/etc/system/local directory. Here is what it looks like for me:

[host::<some of the hosts>]
TZ = US/Pacific

I have tried adding the following setting to the stanza (according to this post https://community.splunk.com/t5/Getting-Data-In/Universal-Forwarder-and-props-conf-and-transforms-conf/m-p/39732/highlight/true#M7401) but it is of no help:

force_local_processing = true

However, this just does not work. I can see logs in Splunk Cloud but they are in UTC. I have also tried using [<sourcetype>] and [source::<source>] instead of [host::<myhost>], but it doesn't do anything. Any other things that I can try?
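One thing to keep in mind is that a universal forwarder normally does not parse timestamps at all, so a TZ stanza in props.conf on the intermediate UF is typically ignored; parsing happens on the Splunk Cloud indexers. A sketch of what could be deployed to the parsing tier instead (the host pattern is a placeholder, e.g. delivered via a small app installed in Splunk Cloud):

[host::pt-host-*]
TZ = US/Pacific

Hosts without a matching stanza keep their default handling, so the UTC sources are left alone.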
I have a bunch of panels for distinct purposes (say about 100-200). In my use case scenario: I would like to be able to share these panels with other users so they can create their own dashboards. OR: is there a way to generate dashboards using some Splunk SDK or some service, as code? For example, if I want a dashboard to contain 10 panels, can I just take the panel source code and combine it into a dashboard in a way that's reliable and won't cause issues on dashboard imports? Would be happy to clarify more if needed. Thanks.
Hello Splunkers,

I wrote a Python script that explores the splunk-var indexes and calculates their total size, then asks the user if they'd like to back them up. After the user indicates which indexes they'd like to back up, it copies all buckets and other metadata in the db path (excluding the hot bucket) to a directory that is specified as a command-line argument. I want to know:
How to actually back up files (is it as simple as copying out the directory and then later copying it back in and restarting Splunk?)
How to best implement bucket policies (maxHotSpanSecs)
How to understand bucket rollover when we see unexpected behavior
What indexes.conf I should use to have each bucket hold one day's worth of data (see the sketch below)

Thanks in advance
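On the indexes.conf question, a minimal sketch of an index whose hot buckets roll roughly once per day could look like this (the index name and paths are placeholders; maxHotSpanSecs caps the event-time span of a hot bucket, it does not guarantee exactly one calendar day per bucket):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Cap each hot bucket at one day of event time (in seconds).
maxHotSpanSecs = 86400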
In the Splunk Monitoring Console, the Forwarders: Deployment panel has duplicate entries for a host: one is active with the latest version, and the other shows the old version as missing. How can I remove the old, missing entries for the host without rebuilding the forwarder asset table? Suggestions please?
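If rebuilding the asset table really is off the table, one workaround sometimes used is to edit the Monitoring Console's forwarder asset lookup directly. The lookup and field names below (dmc_forwarder_assets, hostname, version) are assumptions that should be verified with a plain | inputlookup first, and the old rows will reappear if the asset table is later rebuilt with the stale forwarder still reporting:

| inputlookup dmc_forwarder_assets
| search NOT (hostname="old-forwarder-name" version="6.5.2")
| outputlookup dmc_forwarder_assets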
I have raw data in the format {"col1":"1",{col2":"2"},{.........(continue), which, if I visualize it using https://codebeautify.org/string-to-json-online, looks like:

Object{1}
  a{4}
    col1: 1
    col2: 2
    col3: 3
    col4: 4
  b[3]
    [0]  col5: 5,   col6: [6]
    [1]  col5: 55,  col6: [66]
    [2]  col5: 55,  col6: [666]

And if my Splunk query is:

index="api"
| rename a.col1 as "col1", a.col2 as "col2", b{}.col5 as "col5", b{}.col6{} as "col6"
| table "col1","col2","col5","col6"

it displays:

col1 | col2 | col5 | col6
1    | 2    | 5    | 6
     |      | 55   | 66
     |      |      | 666

Moreover, if I export it to CSV, it only shows me the first value of each array (multivalue) field:

col1 | col2 | col5 | col6
1    | 2    | 5    | 6

but it should be like this (each row mapped 1:1):

MY DESIRED TABLE
col1 | col2 | col5 | col6
1    | 2    | 5    | 6
1    | 2    | 55   | 66
1    | 2    | 55   | 666
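A possible way to get the 1:1 mapping is to pair up the two arrays with mvzip before expanding, instead of renaming them independently; this assumes b{}.col5 and b{}.col6{} always have the same number of values, as in the sample above:

index="api"
| eval pair=mvzip('b{}.col5', 'b{}.col6{}', "##")
| mvexpand pair
| eval col5=mvindex(split(pair, "##"), 0), col6=mvindex(split(pair, "##"), 1)
| rename a.col1 as col1, a.col2 as col2
| fields - pair
| table col1 col2 col5 col6

Because every result row is now single-valued, the CSV export keeps all three col5/col6 pairs.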
One of the fields in my raw data is multivalue (like an array). I can see those values in a column in Splunk, but when I try to export them to CSV, only the first value gets copied and the rest disappear.

E.g. in Splunk (each cell is multivalue):
col1
val1 val2
val2 val3 val4

While exporting:
col1
val1
val2
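Since CSV cells cannot hold Splunk multivalue fields as-is, one workaround is to flatten the multivalue field into a single delimited string just before exporting, e.g. with mvjoin (nomv is a similar option); col1 here matches the example field name:

... your search ...
| eval col1=mvjoin(col1, "; ")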
Hello, I currently have an intake exceeding 100GB per day, and I would like to know the best-practice recommendations to support this volume without affecting performance. How many servers or indexers are needed, and what are their minimum and recommended specifications?
Hi Team, we are exploring Splunk Cloud and need the following clarifications:
1) Does an AWS Splunk Cloud instance support the Common Information Model (CIM)?
2) Is Splunk Enterprise Security included in the AWS Splunk Cloud license?
3) Can we make search API calls from another application to get the AWS Splunk Cloud indexed data (CIM supported)?
4) Can you provide a demo of AWS Splunk Cloud (SaaS)?
I have set up a new on-prem SH cluster and a deployment server with Splunk Enterprise version 8.2.5. I have configured the 3 new SHs as license slaves and pointed them to the License Master, but the slaves are not syncing with the License Master. Note: we have three license pools on the License Master, and I have updated the pool stanzas in server.conf as well, but no luck. Please suggest.

I performed the following configuration steps in server.conf on the 3 SHs and the deployment host separately:
Select a new passcode to fill in for pass4SymmKey.
SSH to the Splunk instance.
Edit the /opt/splunk/etc/system/local/server.conf file.
Under the [general] stanza pass4SymmKey field, replace the hashed value with the new passcode in plain text. It will stay in plain text until Splunk services are restarted.
Save the changes to the server.conf file.
Restart Splunk services on that node.

Here is the server.conf on a SH acting as a license slave:

[general]
serverName = SHHost123
pass4SymmKey = <same as on the License Master>

[license]
master_uri = https://x.x.x.x:8089
active_group = Enterprise

[sslConfig]
sslPassword = 12344…

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[lmpool:auto_generated_pool_enterprise]
description = auto_generated_pool_enterprise1
quota = MAX
slaves = *
stack_id = enterprise

[replication_port://9023]

[shclustering]
conf_deploy_fetch_url = http://x.x.x.x:8089
disabled = 0
mgmt_uri = https://x.x.x.x:8089
pass4SymmKey = 23467….
shcluster_label = shclusterHost_1
id = D6E63C0A-234S-4F45-A995-FDDE1H71B622
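One thing to double-check: the [lmpool] stanzas only take effect on the license master itself, so they can be left out of the search heads' server.conf. On an 8.2.x slave, the pointing can also be done (or verified) from the CLI rather than by hand-editing server.conf; the address below is the placeholder from the config above:

splunk edit licenser-localslave -master_uri https://x.x.x.x:8089
splunk restart

After the restart, the slave should appear in the output of 'splunk list licenser-slaves' run on the license master within a few minutes.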
Hello, I am trying to install an SSL certificate for Splunk to permit HTTPS access to the console. As part of the procedure, I generated the CSR, the key, and the signed PEM certificates. I uploaded the files to the Splunk host and created (and edited) server.conf with the following information:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem

I also disabled the [sslConfig] stanza in server.conf. When I try to restart Splunk, the service fails with the following errors. Please advise on how to fix the issue.

WARNING: Cannot decrypt private key in "/opt/splunk/splunk/etc/auth/mycerts/mySplunkWebPrivateKey1.key" with>
Feb 03 17:09:16 frczprmccinfsp1 splunk[1559898]: WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifySer>

Thanks in advance.
Siddarth
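For Splunk Web specifically, enableSplunkWebSSL, privKeyPath and serverCert are normally read from web.conf rather than server.conf, so a sketch of the intended setup would be a web.conf in $SPLUNK_HOME/etc/system/local/ like the one below. The "cannot decrypt private key" warning usually means the key file is passphrase-protected, so either supply the passphrase or re-export the key without one:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem
# Only if the private key has a passphrase (verify this setting name for your version):
# sslPassword = <key passphrase>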