Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, We are trying to install Splunk on the D drive, as we are having storage issues on the C drive, but we are unable to change the installation path from C to D. Please help us overcome this issue.   Thanks, Developer.
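A minimal sketch of one way to do this, assuming a Windows MSI install (the installer filename below is a placeholder for your actual download): the Splunk MSI accepts an INSTALLDIR property when run from an elevated command prompt, and the GUI installer exposes the same path under "Customize Options".

msiexec.exe /i splunk-8.x.x-xxxx-x64-release.msi INSTALLDIR="D:\Splunk" AGREETOLICENSE=Yes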
Hi All, I am trying to create a summary index that runs once a week, and I want only a few fields to be populated in the summary index. Questions: 1) I want only three fields in the summary index - Test1, Test2, Test3. Can I use the table command on these 3 fields at the end of my query and create a report to populate the summary index? If I use the fields command, the above fields do not show up in my index - why is that? I want these fields in the SI so that I can run different stats commands and use them in my dashboard. 2) Also, I used a time range of last 7 days (to summary-index the last 7 days of data), but only the first 3 days of data are being written to the SI, and I don't see any errors. I googled this question but am not getting an exact answer. Can anyone please help me understand this? Thanks in advance. Newbie to Splunk
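A minimal sketch of one way to populate a summary index with just those three fields, assuming a hypothetical base search and a summary index named my_summary (summary indexing keeps the fields of whatever results the scheduled search returns, and ending in a reporting command such as stats is generally more reliable than table):

index=your_index sourcetype=your_sourcetype
| stats count by Test1 Test2 Test3
| collect index=my_summary

On the 3-of-7-days point, one possible explanation worth ruling out: if the report is scheduled with an earliest/latest window shorter than the full week, only part of the range lands in the summary each run.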
Hi guys, this is the first time I'm trying to install the Splunk universal forwarder (8.2.2.1) on an AIX (7.1) machine. I have no previous experience with AIX, but have installed many forwarders on Linux/Unix machines. The issue I get is that when the install is done and Splunk prompts for a user/password, the entire process hangs after I've input the username. The only way out is to kill the PID or exit the machine. Steps taken:
1. Downloaded the tgz file and expanded the tar into the /opt folder with: gunzip -c "filename.tar.gz" | tar -xvf -
2. Changed ownership with: chown -R splunk:splunk /splunk/splunkforwarder
3. Switched to the splunk user to install: su - splunk
4. Ran from $SPLUNK_HOME/bin: ./splunk start --accept-license
5. Selected Y
6. Entered the username, and this is where it stops.
Any suggestions would be kindly appreciated.
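One workaround sketch that may avoid the interactive prompt entirely, assuming a UF version that supports seeded credentials: create user-seed.conf before first start, then start non-interactively (the username and password below are placeholders).

$SPLUNK_HOME/etc/system/local/user-seed.conf:
[user_info]
USERNAME = admin
PASSWORD = your-strong-password

$SPLUNK_HOME/bin/splunk start --accept-license --answer-yes --no-prompt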
Hi Team. I have a big ol' search that tables a bunch of resource usage data. Now I smack an outputcsv on that bad boy and schedule it to run once a month. Before it runs next month, I would like to run the search again, drag in the old results with inputcsv, compare the two, and maybe only list the changes (and maybe how much they changed?).

(index="redacted" OR index="redacted2") EventCode=1011
| rex field=Message "\W(?<ServerName>\S+)\s\w+\W(?<PowerState>\S+)\s\w+\W(?<CpuCount>\S+)\s\w+\W(?<CoresPerSocket>\S+)\s\w+\W(?<GuestHostName>\S+)(:)(?<GuestOS>.+)(MemoryMB)\W(?<MemoryMB>\S+)\s\w+\W(?<ResourcePool>.+)(Version)\W(?<Version>\w+)\s\w+\W(?<UsedSpaceGB>\S+)\s\w+\W(?<ProvisionedSpaceGB>\S+)\s\w+\W(?<VMHost>\S+)\s\w+\W(?<Folder>.+)"
| eval UsedSpaceGB = round(UsedSpaceGB,2)
| eval ProvisionedSpaceGB = round(ProvisionedSpaceGB,2)
| search VMHost="***"
| table ServerName PowerState CpuCount CoresPerSocket GuestHostName GuestOS MemoryMB ResourcePool Version UsedSpaceGB ProvisionedSpaceGB VMHost Folder
| dedup ServerName
| search ServerName="*"
| search VMHost="*" PowerState="*" ResourcePool="redacted "
| outputcsv redacted_filename.csv

New search: | inputcsv redacted_filename.csv lists the old results just fine, except it sorted the column names alphabetically, but whatever. Is there an easy way to compare the two, or will I have to extract all fields and compare manually?
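A minimal comparison sketch, assuming ServerName is the stable key and <new search> stands in for the full search above (minus the outputcsv): tag each result set, append them, and keep only rows that don't appear identically in both runs.

<new search>
| eval run="new"
| append [| inputcsv redacted_filename.csv | eval run="old"]
| stats values(run) as runs dc(run) as run_count by ServerName PowerState UsedSpaceGB ProvisionedSpaceGB
| where run_count=1

For numeric deltas instead, rename the old columns (e.g. old_UsedSpaceGB) before a join type=outer ServerName [...], then eval used_delta = UsedSpaceGB - old_UsedSpaceGB.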
I have a table visualization which contains details such as name, course, and others. When I click on any value in the name column, that value should pass through the URL and open another dashboard showing the entire details for that name. We have tried $row.name.value$, $value$, and other syntax as well, but no luck. Can anyone help me here? I set the token in the source code and passed the values in the URL too, but it is not taking the value. app/search/dashboarddetails?form.name=$value$ app/search/dashboarddetails?form.name=$row.name.value$ etc. And do we have any constraints that only 2 or 3 tokens can be passed in the URL?
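A minimal Simple XML sketch, assuming the column's field is literally named name and the target dashboard is dashboarddetails: in table drilldowns the row tokens are $row.<fieldname>$ (no trailing .value), $click.value2$ holds the clicked cell, and the |u filter URL-encodes the token. As far as I know there is no 2-3 token limit; each token just becomes another form.* query parameter.

<table>
  <search>
    <query>... your table search ...</query>
  </search>
  <drilldown>
    <link target="_blank">/app/search/dashboarddetails?form.name=$row.name|u$</link>
  </drilldown>
</table>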
Hello! Splunk newbie here - I was hoping to get some advice on how to condense this search query. Is there another command I can use so I don't need so many eval statements? What I'm trying to do is remove any results that contain the words Okta, FIDO, Google, and Voice, so I'm left with the users that have the MFA factors Password and SMS authentication. Thanks in advance!

source="test.csv" sourcetype="csv"
| stats values("MFA Factor") as MFA, values("Last Enrolled_ISO8601") as "Last Enrolled", values("Last Used_ISO8601") as "Last Used" by User, Login
| eval MFA= if(like(MFA,"Okta%"),null, MFA)
| eval MFA= if(like(MFA,"%FIDO%"),null, MFA)
| eval MFA= if(like(MFA,"Google%"),null, MFA)
| eval MFA= if(like(MFA,"Voice%"),null, MFA)
| where isnotnull(MFA)
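A condensed sketch using a single regular expression, assuming MFA is multivalued after stats values() (mvfilter keeps only the values for which the expression is true, so Password and SMS survive):

source="test.csv" sourcetype="csv"
| stats values("MFA Factor") as MFA, values("Last Enrolled_ISO8601") as "Last Enrolled", values("Last Used_ISO8601") as "Last Used" by User, Login
| eval MFA=mvfilter(NOT match(MFA, "Okta|FIDO|Google|Voice"))
| where isnotnull(MFA)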
Hello, We need to configure TA-mailclient, but using a separate account (username and password) for the mailbox. Do you know by any chance what parameter we should add to our inputs.conf? Unfortunately, changing the format of the mail stanza to [mail://<username>\<mailaddress>] or adding a new parameter to the input stanza does not help. We tried:

[mail://thisisourmail.com]
attach_message_primary = False
host = host1
include_headers = True
index = mail
mailbox_cleanup = readonly
mailserver = mailserver.zz
maintain_rfc = False
username = mail_username
password = encrypted
protocol = IMAP
disabled = 0

and

[mail://mail_username\thisisourmail.com]
attach_message_primary = False
host = host1
include_headers = True
index = mail
mailbox_cleanup = readonly
mailserver = mailserver.zz
maintain_rfc = False
password = encrypted
protocol = IMAP
disabled = 0

Best regards, Justyna
Hi, I want to integrate Cisco ESA logs with Splunk. We have a syslog collector where a UF is installed. Can anyone please help me with documentation for the integration?
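A minimal inputs.conf sketch for the UF on the syslog collector, assuming the ESA messages land as files under a hypothetical /var/log/esa/ path; the sourcetype and index below are assumptions, so check the Splunk Add-on for Cisco ESA documentation for the exact names your version expects.

[monitor:///var/log/esa/*.log]
sourcetype = cisco:esa:textmail
index = email
disabled = 0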
Hi, I have a requirement where I need to configure a UF to send data to two different deployment servers, or in other terms to two different Splunk Enterprise deployments. We are doing this because the application team's data needs to go to two different projects' Splunk Enterprise instances: one needs audit logs and the other needs infrastructure data. To comply with company security policy, each Splunk Enterprise should have control over managing its own logs as well as over its own deployment server. So please let me know if there is any approach whereby I can configure two deploymentclient.conf files on one UF and send data to two different deployment servers. Thank you!
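A sketch covering the data-splitting half of this (the deployment-client half is a separate question, since deploymentclient.conf points at a single deployment server): one UF can route different inputs to two indexer destinations via outputs.conf target groups plus per-input _TCP_ROUTING. All names, hosts, and paths below are assumptions.

outputs.conf:
[tcpout]
defaultGroup = infra_indexers

[tcpout:audit_indexers]
server = splunk-audit.example.com:9997

[tcpout:infra_indexers]
server = splunk-infra.example.com:9997

inputs.conf:
[monitor:///var/log/audit/audit.log]
_TCP_ROUTING = audit_indexers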
I want to hide the column names from the 2nd table onwards for the panel below:

<row>
  <panel>
    <title>STATS : SLI/SLO Dashboard count</title>
    <table>
      <search base="pubsubLatencyHighAckDelayDFBaseSearch">
        <query> | stats values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount</query>
      </search>
    </table>
    <html depends="$dontshow$">
      <style>
        #tableWithHiddenHeader1 thead{
          visibility:hidden;
          display:none;
        }
      </style>
    </html>
    <table id="tableWithHiddenHeader1">
      <search base="dfLatencyOverallProcessingDelayBaseSearch">
        <query> | stats values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount</query>
      </search>
    </table>
  </panel>
</row>

However, when I use visibility:hidden; display:none; the alignment is bad. Attached is a screenshot.
Hello, I have verified that sourcetype=aws:config is being ingested from AWS according to https://docs.splunk.com/Documentation/AWS/6.0.3/User/Topology. Still, nothing shows up under the Topology tab. The troubleshooting documentation references this: "check that the Config: Topology Data Generator saved search is enabled". I've looked for this saved search and it doesn't exist. I found other references on the Community saying that the Topology Data Generator search isn't found. Is that an error on Splunk's part for putting it in the documentation, or am I missing something? Any help is greatly appreciated!   V/r, mello920
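A quick SPL sketch for listing which topology-related saved searches actually exist on your search head (adjust the title filter as needed):

| rest /servicesNS/-/-/saved/searches
| search title="*Topology*"
| table title eai:acl.app disabled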
I am a newbie to Splunk and need to configure Palo Alto with Splunk, and am looking for corrective action. What I did so far:
1. Allowed port 1514 for rsyslog: sudo semanage port -a -t syslogd_port_t -p tcp 1514
2. Ran the firewall command to allow port 1514: sudo firewall-cmd --zone=public --permanent --add-port=1514/tcp
3. On the deployment server, in serverclass.conf, I created the app name, enabled it, and reloaded the deployment server, just like for other appliance apps such as Barracuda and Cisco.
4. Installed Splunk_TA_paloalto on the heavy forwarder. The UI configuration is empty, as I don't see any information that can be added. After reading a few blogs I came up with the inputs.conf stanzas below, Ver 1 and Ver 2.

Outcome: Logs are being ingested into the cisco index, because the cisco input is monitoring the file path /*.log where I have provided stanza version 1 (not sure if it is working fine; please note the log folder names are in capitals, e.g. /var/log/remote/ABC-FW01-DOMAIN.COM/1,2022LOG). Logs are going to the cloud from the remote folder, but not through the Palo Alto app, so the cloud-based PA app won't be able to read them. Please guide me on corrections.
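A minimal sketch of a dedicated monitor stanza for the firewall's syslog files, assuming the path from above and the pan_logs index the Palo Alto app conventionally searches (a narrower path also keeps the cisco input's /*.log monitor from claiming these files):

[monitor:///var/log/remote/ABC-FW01-DOMAIN.COM/*.log]
sourcetype = pan:log
index = pan_logs
disabled = 0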
Hello - thank you in advance for the help. I am getting the following raw data in Splunk events, which I'd like to pull into a table format with the host, success, and error fields as columns. I tried this query but no success:

| makeresults
| eval _raw="{\"host\"},{\"success\",{\"error\"}"
| spath path=host{} output=temp
| mvexpand temp
| spath input=temp
| fillnull value="None"
| table host,success,error
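A minimal working sketch, assuming each event is actually a flat JSON object with those three keys (the sample values are invented; with valid JSON in _raw, a bare spath extracts every top-level field, so no path/mvexpand is needed):

| makeresults
| eval _raw="{\"host\": \"server01\", \"success\": 10, \"error\": 2}"
| spath
| table host success error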
Hi, I have a timechart that is currently split into 8-hour shift bins; however, as it is a timechart, the x-axis only shows the timestamps, while I would like the bins to be labeled by their shifts. Sample data:

Count | Time             | Shift
500   | 0700 (Yesterday) | Shift A
750   | 1500 (Yesterday) | Shift B
500   | 2300 (Yesterday) | Shift C
700   | 0700 (Today)     | Shift A
600   | 1500 (Today)     | Shift B

One tricky part is that Shift C overlaps two dates as well. Any ideas on how to define the time range for the shifts, then split and label the bins by their shifts, would be greatly appreciated. Thanks!
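A sketch of one approach, assuming shifts start at 0700/1500/2300 and a Count field as in the sample: derive a label from the event hour with case(), roll the post-midnight tail of Shift C back onto the previous day, then chart by the shift label instead of letting timechart bin by _time.

... your base search ...
| eval hour = tonumber(strftime(_time, "%H"))
| eval Shift = case(hour>=7 AND hour<15, "Shift A", hour>=15 AND hour<23, "Shift B", true(), "Shift C")
| eval shift_day = strftime(relative_time(_time, if(hour<7, "-1d@d", "@d")), "%d %b")
| chart sum(Count) as Count by shift_day, Shift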
Hi everyone. I am not sure this is the right place to post, but I figured an introduction wasn't a bad place to start. I just graduated from a local technical college with degrees in Web Development and CyberSecurity. As a security intern with the college's technology services, I ended up using Splunk quite a bit. However, I always gravitated more towards development and coding than the typical security work. So my supervisors came up with a development project that could benefit them and let me develop something as my final project. The result was TenaPull, a Java application that processes data from the Nessus API and outputs it into JSON files that can be ingested by a Splunk index. https://github.com/billyJoePiano/TenaPull (It's my understanding that there used to be a Python 2 script which did this, but the script was deprecated and no longer works. I did briefly examine the script when I started, but didn't dig very deep into it.) I am interested in hearing about your experience with it, and any issues or problems you may have encountered using it. I am definitely open to making changes and improvements if there is a demand for that. Also, if there is a better place to post this information, please let me know as well! I'd love to see more people using TenaPull.
Hello Splunkers, with most applications, inputs and outputs are handled by their respectively named config files (inputs.conf and outputs.conf). This brings some advantages, namely for this issue: _TCP_ROUTING = <targetGroup>. However, with the GCP Add-on, storage bucket inputs are contained in google_cloud_storage_buckets.conf. I tried adding _TCP_ROUTING in there, but it does not work according to the conf validation. We have a HF that is sending all data to an on-prem Splunk instance, with some data being routed to a UF for delivery to Splunk Cloud. We have a few inputs on that HF that can be routed correctly using _TCP_ROUTING. Unfortunately, for the GCP add-on, I can't think of a way to do it. Any advice? Thanks!
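One hedged alternative sketch: since a heavy forwarder parses events, routing can be applied at parse time with props/transforms keyed on the sourcetype rather than on the input stanza. The sourcetype and group names below are placeholders; use whatever sourcetype the GCP add-on assigns to the bucket data and a tcpout group defined in your outputs.conf.

props.conf:
[your:gcs:sourcetype]
TRANSFORMS-route_gcs = route_gcs_to_cloud

transforms.conf:
[route_gcs_to_cloud]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cloud_group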
I've been asked to find historical index volume information, going back 5 years, to make projections for future infrastructure and license needs. _internal is of no use, because it's cleared after 30 days. We track disk space, and I can find the disk space info for the cold buckets on the indexer, but they're set to roll off after 60 days, so that's out as well. I understand that anything like this would come in slightly lower than the actual figures, as there are several indexes whose data will have rolled off, but I'm just trying to find a rough baseline to track program growth. This is pretty far beyond my SPL abilities, so I would be grateful for any help!
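A starting-point sketch using dbinspect, which reports per-bucket sizes for whatever buckets still exist on disk, independent of _internal's 30-day retention (it cannot see data that has already been frozen, so treat the result as a floor):

| dbinspect index=*
| eval _time = startEpoch, GB = sizeOnDiskMB / 1024
| timechart span=1mon sum(GB) as on_disk_GB by index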
Splunk 8.2.5 Enterprise receiver and indexer operating on the same RHEL 7.9 system. How do I ingest the Linux audit logs from this system into Splunk? Do I need to install a Universal Forwarder like I did on my other/external systems? I have dashboards created and I'm receiving Linux audit events from my other/external systems, but nothing from the receiver/indexer system itself.
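A minimal sketch: a full Splunk Enterprise instance can monitor its own local files directly, so no UF is needed on the indexer itself. The index and sourcetype below are assumptions; match whatever your dashboards expect, and make sure the user Splunk runs as can read /var/log/audit/audit.log.

$SPLUNK_HOME/etc/system/local/inputs.conf:
[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
index = os
disabled = 0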
Hello all, I'm finding the default indexes.conf settings too small, making various sourcetypes only searchable back about 4 months, but I need a year's worth / the ability to search back that far. I've found numerous Splunk posts on indexes.conf stanzas and settings, one more confusing than the next:
How the indexer stores indexes - Splunk Documentation
Configure index storage - Splunk Documentation
https://wiki.splunk.com/Deploy:BucketRotationAndRetention
I'm afraid I need an "explain it to me like I'm 4 years old" post. What calculator or tool should I use, and which stanzas do I set, to effectively: A) get search visibility into logs older than a few months; B) no longer roll buckets into frozen (which seems to be a.k.a. 'deleted') but into an archive, to facilitate easily restoring them when A) isn't as dialed in as I thought.
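A sketch of the per-index settings that usually govern this, with assumed values. Retention ends when either the time limit or the size limit is hit, whichever comes first, so the size cap often needs raising alongside the time limit, and coldToFrozenDir archives frozen buckets instead of deleting them.

[your_index]
# ~1 year of retention (in seconds)
frozenTimePeriodInSecs = 31536000
# raise this if the size cap is what's evicting data before the year is up
maxTotalDataSizeMB = 1000000
# copy buckets here on freeze instead of deleting them
coldToFrozenDir = /archive/your_index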
Hello experts, we have recently upgraded to 8.2.3. We did it in two phases: first from 7.x to 8.1.0, and then from 8.1.0 to 8.2.3. After the upgrade to 8.2.3 we started getting some errors. In the health status bar, we see errors related to buckets and IOWait, and the indexers appear among the unhealthy instances. Also, during any search we get the below errors for all the indexer peers. Can anyone suggest how to resolve these issues? Thanks in advance.
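A starting-point sketch for digging into the indicators behind the health banner (hedged: the log name and endpoint are from 8.x splunkd health reporting as I understand it; verify the exact paths against your version's documentation):

index=_internal source=*health.log*

| rest /services/server/health/splunkd/details splunk_server=local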