All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

In a Distributed Clustered Deployment with SHC in the Multisite (M4 / M14) model, is any additional license required? Is any downtime required while connecting Site A with Site N in the Multisite (M4 / M14) model?
Dear All, Is there a delay option in Splunk multisite M4/M14? Requirement: Site A is the active site and Site N is the passive site. Data ingestion from the active site should happen in real time, and data from Site N should be ingested at 1 AM every day. Is there any option in multisite to achieve this?
Hi experts, For the Splunk App for Data Science and Deep Learning, is it possible at all to build a custom Docker image native to the M1 or M2 Mac? The images currently available on Docker Hub are for the x86 architecture, and running them in an M1 Docker environment has been problematic: all notebooks that use transformers and keras crash the kernel upon import. Being able to leverage the native M1 and/or M2 GPU would also be useful. Is there any plan to support native Docker image builds for M1 and/or M2? Thanks, MCW
Hi, I am trying to install Splunk SOAR (On-premises) as an unprivileged user on CentOS 7.9, and when I run the ./soar-prepare-system script I get the following error messages: ./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0) ./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.26' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0) ./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.27' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0) ./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0) I tried to install it on CentOS 8 and got another error saying "Unable to read CentOS/RHEL version from /etc/redhat-release". Any suggestions?
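For reference, a minimal shell sketch for comparing the glibc the OS provides with what the bundled Python requires; the library path is taken from the error above, and the availability of rpm/objdump on the host is an assumption:
# glibc shipped by the OS (CentOS/RHEL 7 ships glibc 2.17, older than the GLIBC_2.25+ symbols reported above)
ldd --version | head -n 1
rpm -q glibc
# versioned glibc symbols the bundled libpython actually requires
objdump -T /opt/splunk-soar/usr/python39/lib/libpython3.9.so.1.0 | grep -o 'GLIBC_[0-9.]*' | sort -Vu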
Hello, I am building a new search head cluster. The cluster works fine; however, the deployer throws an error whenever I run "/bin/splunk show shcluster-status" on the deployer. Here is the error I am getting: "Encountered some errors while trying to obtain shcluster status. Search Head Clustering is not enabled on this node. REST endpoint is not available". My server.conf looks like the following: [shclustering] pass4SymmKey = xxxxxxxxx Your input would be appreciated.
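As a point of comparison, a minimal sketch of where this command is normally run and what a member's configuration typically contains; the hostnames, port, and label below are placeholders, and the exact settings should be verified against the search head clustering documentation for your version:
# run on any SHC member rather than on the deployer
/opt/splunk/bin/splunk show shcluster-status -auth admin:changeme
# a member's server.conf usually carries more than pass4SymmKey, e.g.:
[shclustering]
pass4SymmKey = <same key as on the deployer>
mgmt_uri = https://sh-member-1.example.com:8089
conf_deploy_fetch_url = https://deployer.example.com:8089
shcluster_label = shcluster1
[replication_port://9200]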
I'm getting these failures after a prior disk corruption. ERROR TailReader [1876879 tailreader0] - Ignoring path="/somepath/somefile" due to: BTree::Exception: unexpected resulting offset 0 while on order 1, starting from offset 2056 node offset: 53209432 order: 255 keys: { } children: { 0 } I thought I had ejected buckets affected by the corruption; in addition, recent ingestions all go into buckets created after the corruption. What can I do to fix this?
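One approach sometimes suggested for a corrupted fishbucket entry is to reset the tailing checkpoint for the affected file with btprobe; this is only a sketch, the flags should be verified against the btprobe help output for your version, and resetting the entry causes the file to be re-read from the beginning:
$SPLUNK_HOME/bin/splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /somepath/somefile --reset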
I have a lookup table file called adexport.csv that's big... I am trying to join and match two fields: the lookup field UserName with the Splunk field UserId. I have tried variations of join and append, but this doesn't seem to work; my Splunk foo is dead. index=data | lookup adexport.csv UserName as UserId OUTPUT UserId Title | table _time UserId Title
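A minimal sketch of one way this is commonly written, assuming adexport.csv exists as a lookup table file visible to the app and that the UserName and UserId values actually match (outputting only Title avoids overwriting UserId):
index=data
| lookup adexport.csv UserName AS UserId OUTPUT Title
| table _time UserId Title
If no rows match, a case or whitespace mismatch between UserName and UserId is a common cause; normalizing both sides with lower()/trim() in an eval, or defining the lookup in transforms.conf with case_sensitive_match = false, is worth trying.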
When using Splunk Security Essentials: MITRE ATT&CK Framework, we are missing a significant number of alerts. We used to have around 1500 in active and roughly 300 in needs data; however, overnight this dropped to around 200 total (between active and needs data). The following troubleshooting steps have been taken: 1. Updated content with the "force update" under system configuration. 2. Verified communication to the URLs (yes, it can connect). 3. Uninstalled and reinstalled the current SSE version; this cleared the data mapping, and upon install it showed enabled 0 - active 0 - missing data 1715; after the weekend it dropped to 0 - 8 - 195. 4. After I rebuilt the data inventory, it looked as shown below. Here are some screenshots of the security content: 1. Shows content. 2. The drop-down shows 12 MITRE ATT&CK platforms, but the drop-down values are all 0's. 3. Sometimes the data sources show a filter of "none" with 1300+ items, like item 134 below, and sometimes it just doesn't appear. 4. The MITRE map is missing from the configuration tags.
Is it possible to download the splunkcloud.spl file by using curl?
Hi, we are getting a "GnuTLS handshake retry returned error" when trying to communicate with ForeScout. Any suggestions?
I am using a curl command to get data from an API endpoint. The data comes in as a single event, but I want to store each event separately as the events come through, and then build a timechart from that data.
I built a new index intended for storing a report of some very heavily modified and correlated vulnerability data. I figured the only way to get this data to properly match the CIM requirements was through a lot of evals and lookup correlations. After doing all of that, I planned on feeding it back into a summary index and having that be part of the Vulnerability data model. Anyway, I scheduled the report and enabled summary indexing, but my new index doesn't show up in the list of indexes. I noticed a few indexes are missing from the list, and the filter doesn't even work: indexes that are clearly visible in the list are not matched when you type the name of the index. Very strange. I'm an admin and I've done this a few times previously; this particular index is just giving me issues. Not sure what I need to do besides delete it and rebuild it.
Hi all, My question is regarding ADMT log ingestion into Splunk. ADMT logs are sent to a common Azure blob storage account, which holds the logs in Excel format. In Splunk Web, configuration was done by adding the Azure storage account (with its account secret key) in the "Splunk Add-on for Microsoft Cloud Services", and then adding an Azure Storage Blob input that selects the storage account added above. Now I am able to see the logs in Splunk, but the format seems wrong. Attaching a sample event here. Can you advise what needs to be done to get the logs in the correct format? Thanks
I am trying to get a table showing the number of days a user was active in a given time period. I currently have a working search that gives me the number of total logins for each user, and one that gives me the number of unique users per day. I am looking for "unique days per user", i.e. if Dave logs in 5x Monday, 3x Tuesday, 0x Wednesday, 2x Thursday, and 0x Friday, I want to show 3 active days, not 10 logins.
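A minimal SPL sketch for counting distinct active days per user; the base search and the user field name are placeholders to adapt to the existing login search:
index=auth sourcetype=login
| eval day=strftime(_time, "%Y-%m-%d")
| stats dc(day) AS active_days BY user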
We deployed our first Splunk instance in AWS using the AWS tooling, and we see disallowed outbound traffic from our Splunk instance to beam.scs.splunk.com, which I understand is for Splunk Cloud Edge Processor, something we have no intention of using. We would like to disable this traffic, but there is no documentation on it. Can this be done?
When trying to access the documentation for add-on 3088, which should be at https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/About, I am redirected to https://github.com/pages/auth?nonce=cdd5c03e-1d79-4996-9ec8-36e50189986b&page_id=47767351&path=Lw, which is unavailable without a login, and also unavailable with one of my GitHub logins. What's going on here?
I have events that are generated in CSV format with a timestamp embedded in the file name, as shown below. I need to extract the timestamp from the file name and create a new column as _time, so I need a rex query to extract it as YYYY-MM-DD HH:MM:SS. D:\automation\miscprocess\test_utilization_info_20240618_195509.csv
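A minimal sketch, assuming the timestamp is always the trailing _YYYYMMDD_HHMMSS portion of the source path (everything other than the source and _time fields is a placeholder to adapt):
index=your_index sourcetype=your_csv_sourcetype
| rex field=source "_(?<file_date>\d{8})_(?<file_time>\d{6})\.csv$"
| eval file_epoch=strptime(file_date." ".file_time, "%Y%m%d %H%M%S")
| eval _time=file_epoch
| eval file_timestamp=strftime(file_epoch, "%Y-%m-%d %H:%M:%S")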
Hello, Should the internal indexes of the search head be sent to the indexers for storage? If so, can we send them to both indexers without them being in a cluster? Additionally, I have installed an add-on on the search head, and the index where the collected data is stored is defined on the search head at /opt/splunk/etc/apps/search/local/indexes.conf. How can I direct this index to both indexers, which are not in a cluster?
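A minimal outputs.conf sketch on the search head, assuming the usual pattern of forwarding all search head data (internal indexes included) to the indexer layer; the hostnames and port are placeholders, and with two standalone (non-clustered) indexers the data is load-balanced between them rather than replicated:
[indexAndForward]
index = false
[tcpout]
defaultGroup = standalone_indexers
forwardedindex.filter.disable = true
indexAndForward = false
[tcpout:standalone_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
The add-on's indexes.conf would then be placed on the indexers rather than on the search head, so the forwarded events land in that index there.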
Hello Community, I hope you can help. I have a Cloud Foundry environment which sends all its logs to my Splunk forwarder, on which I have installed syslog-ng 4.6. On the Splunk server side the Splunk App for RFC5424 has been installed and configured as documented. My current syslog-ng.conf (without RFC5424, still on syslog-ng 3.23) looks as follows:

@version:3.23
options { flush_lines(0); time_reopen(10); log_fifo_size(16384); chain_hostnames(off); use_dns(no); use_fqdn(no); create_dirs(yes); keep_hostname(yes); owner(); dir-owner(); group(); dir-group(); perm(-1); dir-perm(-1); keep-timestamp(no); threaded(yes); };
source s_tcp514 { tcp (ip("0.0.0.0") port(514) keep-alive(yes) max-connections(100) log-iw-size(10000)); };
destination env_logs { file("/var/log/syslog2splunk/env/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log" template("${UNIXTIME} ${MSGHDR} ${MESSAGE}\n") frac-digits(3) time_zone("UTC") owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk")); };
log { source(s_tcp514); destination(env_logs); };

The inputs.conf:

[default]
host = my-splk-fwd
index = <my-splk-index-xxx>

[monitor:///var/log/syslog2splunk/env/*/*/*.log]
disabled = false
sourcetype = CF:syslog
host_segment = 6
crcSalt = <SOURCE>

As you can see, my Cloud Foundry environment sends syslog over port 514 to the Splunk forwarder, which then ships the logs to the Splunk server. Now I have configured RFC5424 in syslog-ng.conf and also in inputs.conf. My CF syslogs should only be reformatted to RFC5424, so I do not want two sources/destinations and a new port in syslog-ng.conf; I only want my current syslogs to be formatted as RFC5424. I also know that it is not possible to configure two sourcetypes for the same monitor stanza in inputs.conf. So I need to know how to configure both files so that all incoming syslog is formatted as RFC5424, without ending up with two directories containing exactly the same logs. Here is my syslog-ng.conf (with syslog-ng 4.6):

@version: 4.6
options { flush_lines(0); time_reopen(10); log_fifo_size(16384); chain_hostnames(off); use_dns(no); use_fqdn(no); create_dirs(yes); keep_hostname(yes); owner(); dir-owner(); group(); dir-group(); perm(-1); dir-perm(-1); keep-timestamp(no); threaded(yes); };
source s_tcp514 { tcp (ip("0.0.0.0") port(514)); };
destination env_logs { file("/var/log/syslog2splunk/env/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log" template("<${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} ${STRUCTURED-DATA} ${MESSAGE}\n") frac-digits(3) time_zone("UTC") owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk")); };
destination rfc5424_logs { file("/var/log/syslog2splunk/rfc5424/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log" template("<${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} ${STRUCTURED-DATA} ${MESSAGE}\n") frac-digits(3) time_zone("UTC") owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk")); };
# Log routing
log { source(s_tcp514); destination(env_logs); };

==> Do I need to add an additional source/destination here, or is this configuration OK?
The new inputs.conf looks as follows:

[default]
host = my-splk-fwd
index = my-splk-index_xxx

[monitor:///var/log/syslog2splunk/env/*/*/*.log]
disabled = false
sourcetype = ENV:syslog
host_segment = 6
crcSalt = <SOURCE>

[monitor:///var/log/syslog2splunk/rfc5424/*/*/*.log]
disabled = false
sourcetype = rfc5424_syslog
host_segment = 6
crcSalt = <SOURCE>

With this syslog-ng.conf and inputs.conf I can see the RFC5424 sourcetype, but in my opinion the output is exactly the same as before, so I do not notice any difference.
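A minimal sketch of the single-destination approach described above, keeping one destination and applying the RFC5424 template from the post to it directly; whether the Splunk App for RFC5424 expects exactly the sourcetype name rfc5424_syslog should be checked against its documentation:

# syslog-ng.conf: one source, one destination, RFC5424 template applied in place
source s_tcp514 { tcp (ip("0.0.0.0") port(514)); };
destination env_logs { file("/var/log/syslog2splunk/env/${LOGHOST}/${HOST}/${YEAR}-${MONTH}-${DAY}_${HOUR}.log" template("<${PRI}>1 ${ISODATE} ${HOST} ${PROGRAM} ${PID} ${MSGID} ${STRUCTURED-DATA} ${MESSAGE}\n") frac-digits(3) time_zone("UTC") owner("splunk") dir-owner("splunk") group("splunk") dir-group("splunk")); };
log { source(s_tcp514); destination(env_logs); };

# inputs.conf: a single monitor stanza pointing at the same directory, with the RFC5424 sourcetype
[monitor:///var/log/syslog2splunk/env/*/*/*.log]
disabled = false
sourcetype = rfc5424_syslog
host_segment = 6
crcSalt = <SOURCE>

This drops the second rfc5424_logs destination and the second monitor stanza, so there is only one directory of files, already in RFC5424 format, and one sourcetype.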
Able to get the event output in table format, but I am looking for eval conditions to: 1. Remove the T from the timestamp and convert the UTC/GMT value below to EST, in YYYY-MM-DD HH:MM:SS format. 2. Compute the time difference between c_timestamp and c_mod and add that difference in a TimeTaken column.
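A minimal SPL sketch, assuming c_timestamp and c_mod are strings like 2024-06-18T19:55:09 in UTC; the format string, the fixed 5-hour UTC-to-EST offset (which ignores daylight saving), and the assumption that the search-time timezone is UTC are all things to adjust:
| eval ts_epoch=strptime(c_timestamp, "%Y-%m-%dT%H:%M:%S")
| eval mod_epoch=strptime(c_mod, "%Y-%m-%dT%H:%M:%S")
| eval c_timestamp_est=strftime(ts_epoch - 5*3600, "%Y-%m-%d %H:%M:%S")
| eval TimeTaken=tostring(abs(mod_epoch - ts_epoch), "duration")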