All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


msiexec.exe /qn /I splunkforwarder-9.0.2-17e00c557dc1-x64-release.msi DEPLOYMENT_SERVER="10.0.0.7:8089" SPLUNKUSERNAME=Admin SPLUNKPASSWORD=S@M3!! AGREETOLICENSE=Yes LAUNCHSPLUNK=0

This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
-- Migration information is being logged to 'c:\program files\splunkuniversalforwarder\var\log\splunk\migration.log.2022-12-31.15-42-09' --
Migrating to:
VERSION=9.0.2
BUILD=17e00c557dc1
PRODUCT=splunk
PLATFORM=Windows-AMD64

It seems that the Splunk default certificates are being used. If certificate validation is turned on using the default certificates (not-recommended), this may result in loss of communication in mixed-version Splunk environments after upgrade.
"c:\program files\splunkuniversalforwarder\etc\auth\ca.pem": already a renewed Splunk certificate: skipping renewal
"c:\program files\splunkuniversalforwarder\etc\auth\cacert.pem": already a renewed Splunk certificate: skipping renewal
Failed to start mongod. Did not get EOF from mongod after 5 second(s).
[App Key Value Store migration] Starting migrate-kvstore.
Created version file path=c:\program files\splunkuniversalforwarder\var\run\splunk\kvstore_upgrade\versionFile36
[App Key Value Store migration] Collection data is not available.
ERROR - Failed opening "c:\program files\splunkuniversalforwarder\va
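A hedged troubleshooting sketch, not a confirmed fix: msiexec can be asked to write a verbose installer log, which can then be compared with the migration.log path shown above to see exactly where the upgrade stops around the KV Store migration step. The log file path below is made up for illustration; the rest of the command mirrors the one above.

msiexec.exe /qn /L*v "C:\Temp\uf-9.0.2-upgrade.log" /i splunkforwarder-9.0.2-17e00c557dc1-x64-release.msi DEPLOYMENT_SERVER="10.0.0.7:8089" AGREETOLICENSE=Yes LAUNCHSPLUNK=0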
On version 9.0.0, when you go to Manage Apps in the web UI and then click the Name column arrow to sort your apps by name, it just doesn't sort them correctly. Have any of you noticed that?
Though not an emergency yet, I am hoping to make a decision between the two following options soon:
1. Double down on the current strategy of adding indexers in a cluster behind a load balancer and assigning an external port for each additional indexer.
2. Define and pursue an alternative that would allow adding indexers to our indexer cluster without the UFs having to resolve the connectivity challenges that come from not having all future ports allowed across the entire enterprise.

First I'll describe our situation for context, and then I'll ask the question. At our large client there are tens of thousands of Universal Forwarders across the enterprise, and not all parts of the network(s) allow connectivity to our port range. For the sake of conversation, let's say the port range is 10000-10019 on 2 IP addresses: one for a test environment and one for a prod environment. Prod is the main concern here, as we will not be adding indexers to the test environment. Though we don't have 20 indexers yet, that would be a reasonable upper limit for the currently anticipated scope. For the sake of the question, let's say we have 8 indexers. Each port externally maps like this:

prod.address:10000 - idx01:9996
prod.address:10001 - idx02:9996
etc., for a total of 8 in production.

However, earlier in the deployment there were only 4 indexers. Perhaps firewall requests were not always put in to consistently open all 20 ports instead of only the 4 which were online at that time. Firewall teams like to be able to test and verify connectivity rather than allow future-needed connectivity, and thereby save themselves the trouble of the imminent 70,000+ firewall requests that will be needed to open up 20 ports across as many hosts. (And perhaps it would be better to simply have this allowed across the enterprise as part of my Option 1 above.)

My understanding is that Option 2 is not an option, because any strategy for presenting only a single endpoint like the following would prevent the UFs from having their special conversation with each indexer, which is part of Splunk's own particular way of balancing load, e.g.:

prod.ip:9996 : round-robin TCP (or whatever makes sense) to idx1:9996, idx2:9996, idx3:9996, ...

Short of redoing absolutely everything and moving to Heavy Forwarders behind a load balancer, I believe there is no other way of doing this. The biggest impact of moving to Heavy Forwarders would be having to re-onboard thousands of custom applications, in addition to planning cutover from one way of collecting logs to the other in waves of applications.

So my question is: are there any alternatives for load balancing across multiple indexers which would allow us to use only one of our existing ports? Thanks!
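For reference, a hedged sketch of the forwarder-side configuration this discussion revolves around, with hypothetical hostnames: Splunk's auto load balancing is performed by the Universal Forwarder itself rotating through the server list in outputs.conf, which is why a plain TCP load balancer in front of the indexers is generally discouraged. The indexers do not each need a distinct receiving port; they can all listen on the same port, but each forwarder does need direct reachability to every indexer address on that port. With an indexer cluster, indexer discovery via the cluster manager is another way to keep the server list out of the UFs, though the reachability requirement is the same.

[tcpout]
defaultGroup = prod_indexers

[tcpout:prod_indexers]
server = idx01.example.com:9996, idx02.example.com:9996, idx03.example.com:9996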
I would like to display multiple values in multiple pie charts. For example, I want to display (Consumption & Remaining_Utilization) for each PowerStation in trellis mode, using the following (partial) query:

index="mtx" source="*Dual_Station*" MTX="*" PowerStation="*"
| eval Remaining_Utilization = Capacity - Consumption
| stats values(Remaining_Utilization) as Remaining_Utilization, values(Consumption) as Consumption by PowerStation

(What is missing after the stats?) I checked each and every thread related to pie charts in trellis mode in the community and couldn't find any answer.

Hint: I use the following query to draw a single pie chart:

index="mtx" source="*Dual_Station*" MTX="MTX_Name" PowerStation="PowerStation_Name"
| eval Remaining_Utilization = Capacity - Consumption
| chart values(Consumption) as Consumption over Remaining_Utilization
| transpose
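A hedged sketch, reusing the index and field names above and assuming latest() is an acceptable aggregation in place of values(): reshaping the per-PowerStation result into one row per metric with untable, then charting the metric value by PowerStation, produces a result the pie chart visualization can render in a trellis layout split by PowerStation, with Consumption and Remaining_Utilization as the two slices in each panel.

index="mtx" source="*Dual_Station*" MTX="*" PowerStation="*"
| eval Remaining_Utilization = Capacity - Consumption
| stats latest(Consumption) as Consumption, latest(Remaining_Utilization) as Remaining_Utilization by PowerStation
| untable PowerStation Metric Value
| chart sum(Value) over Metric by PowerStation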
I'm having an issue with one of my monitored paths. Here's the monitor stanza; the blacklist line should only blacklist one file in a directory of about 420 log files:

[monitor:///logs/reg*/last/...]
sourcetype = xxxx:Regional
blacklist = xxxx_\d{4}-\d{2}-\d{2}\.log
index = xxxx
disabled = false
crcSalt = <SOURCE>

The output of splunk list monitor shows me all the files I expect to see based on the above stanza. Splunkd.log shows no problems reading any of them. My problem is that when I search Splunk, I'm missing all data from roughly 100 of the files, files that list monitor shows I'm watching. I recently added the crcSalt = <SOURCE> line thinking that would help; it has not. Am I missing something obvious?
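A small sketch for narrowing this down, assuming the index and sourcetype names from the stanza above: comparing what tstats sees per source against the splunk list monitor output shows whether the missing ~100 files were never indexed at all, or were indexed under a different index or sourcetype than expected.

| tstats count, latest(_time) as last_event where index=xxxx sourcetype="xxxx:Regional" by source
| convert ctime(last_event)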
New customer seeking guidance for creating indexes/sourcetypes and determining granularity. Primarily we're looking for deeper guidance on why, more so than what. We have a large, complex environment.

Our naming scheme for indexes thus far is organization_category_purpose (e.g. acme_net_fw):
organization - unique to us, required, primarily used to segment data between organizations
category - broad, like network, application, endpoint, etc.
purpose - more specific, largely unique per category

Does the following seem best practice for firewalls?
2 or 3 indexes used by firewalls (traffic, operations, maybe threats?)
Multiple sourcetypes split into the various indexes

We are looking at SC4S as a guide (https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/PaloaltoNetworks/panos/), although their examples are not always consistent. We are struggling to determine how granular to be with the purpose of the index and with the number of possible sourcetypes we can/will have. We do not have the need to specify sensitivity or retention time. Furthermore, we do not have the need to separate security/infrastructure teams. A slide from a Splunk presentation (not reproduced here) suggests that many sourcetypes get their own index for efficiency.

Questions:
1. With 4-5 separate firewall products in use in one organization (the most complex), we're looking at 20-25 unique sourcetypes distributed into around 3 firewall indexes, just for firewalls. Does this sound correct? We want to avoid unnecessary complexity for future searches, documentation, etc. while not destroying our efficiency.
2. Can anyone speak to their experiences with creating too many/too few indexes? Specifically on long-term organization, search efficiency, and overall experience?
3. Can anyone offer any additional real-world guidance on creating a data catalog?
4. We can't see any reason to split up Windows event logs for endpoints (Security/Application, etc.) but could see Security being separate from the others for DCs. Does that sound correct?

Any resources or guidance appreciated. Here's what we're using so far:
SC4S example structure: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/PaloaltoNetworks/panos/
https://lantern.splunk.com/Splunk_Success_Framework/Data_Management/Naming_conventions
https://subscription.packtpub.com/book/data/9781789531091/5/ch05lvl1sec32/best-practices-for-administering-splunk
https://kinneygroup.com/blog/the-proverbial-8-ball-splunk-implementation/
I have run across an edge case dealing with some F5 data. Sometimes a node-down event can be reported one or more times before the node-up event occurs. Currently, setting up a transaction on pool and member name (which should be unique), I end up with orphan records which aren't really orphans. Is there some way to have only one transaction open per unique set of fields, skip the next matching start event, and close the transaction when it finds the endswith? I know I could set keeporphans=false, but that would negate the whole purpose of this report, which is to determine if a node is down. Here is what I am trying to do:

| makeresults
| eval _raw = "Nov 19 2022 00:24:37 mcpd[9745]: 01070638:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-East:443 monitor status down. [ /den-dmz/ShapeMonitor: down; last error: /den-dmz/ShapeMonitor: Response Code: 307 (Moved Temporarily) @2022/11/19 00:24:37. ] [ was up for 0hr:0min:36sec ]"
| eval _time=strptime("1668835477","%s" )
| append [| makeresults | eval _raw="Nov 19 2022 00:25:22 mcpd[9745]: 01070727:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-East:443 monitor status up. [ /den-dmz/ShapeMonitor: up ] [ was down for 0hr:0min:5sec ]" |eval _time=strptime("1668835522","%s" ) ]
| append [| makeresults | eval _raw="Nov 19 2022 00:25:17 mcpd[9745]: 01070638:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-East:443 monitor status down. [ /den-dmz/ShapeMonitor: down; last error: /den-dmz/ShapeMonitor: Response Code: 307 (Moved Temporarily) @2022/11/19 00:25:17. ] [ was up for 0hr:0min:31sec ]" |eval _time=strptime("1668835517" , "%s" ) ]
| append [| makeresults | eval _raw="Nov 19 2022 00:25:17 mcpd[9745]: 01070638:5: Pool /den-dmz/ShapePool member /den-dmz/Shape-Prod-OTHER:443 monitor status down. [ /den-dmz/ShapeMonitor: down; last error: /den-dmz/ShapeMonitor: Response Code: 307 (Moved Temporarily) @2022/11/19 00:25:17. ] [ was up for 0hr:0min:31sec ]" |eval _time=strptime("1668835600" , "%s" ) ]
`comment("Find Pool Name")`
| rex field=_raw "(: Pool | ltm pool )(?<pool>.*?)( member| {)"
`comment("Determine which member of Pool")`
| rex field=_raw "(member |members delete { )(?<member>.*?)( monitor status| })"
`comment("Determine Actually Status")`
| rex field=_raw "monitor status (?<status>.*?)\."
`comment("deal with up down time")`
| eval timedown=if(status=="down", _time, null())
| eval timeup=if(status=="up", _time, null())
| fieldformat timedown=strftime(timedown,"%F %T")
| fieldformat timeup=strftime(timeup,"%F %T")
| sort 0 _time desc
| transaction pool, member startswith=eval(status=="down") endswith=eval(status=="up") keeporphans=true
| eval down_duration=if(isnull(timeup),now() - timedown, timeup - timedown)
| fieldformat down_duration=tostring(down_duration,"duration")
| table _time, pool, member, timedown, timeup, down_duration
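A sketch of one possible workaround, reusing the pool/member/status fields extracted above: dropping repeated same-status events for a given pool and member with streamstats before the transaction, so only the first "down" opens a transaction and the eventual "up" closes it. This assumes the events are sorted oldest-first at the streamstats step and re-sorted newest-first before transaction, as below.

...
| sort 0 pool member _time
| streamstats current=f window=1 last(status) as prev_status by pool member
| where isnull(prev_status) OR status!=prev_status
| sort 0 - _time
| transaction pool, member startswith=eval(status=="down") endswith=eval(status=="up") keeporphans=true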
Hi all, a question before starting a new configuration. I configured custom fields on some Universal Forwarders using _meta in inputs.conf, and it runs correctly. In my architecture there's an intermediate Forwarder that collects different _meta from the Universal Forwarders, and it also runs correctly. If I now try to add _meta to some inputs on the Heavy Forwarder itself: in your opinion (and/or experience), must I configure _meta in each input stanza of the Heavy Forwarder, or can I configure the [default] stanza without overriding values from the other Universal Forwarders? Thank you for your attention. Ciao. Giuseppe
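A minimal inputs.conf sketch of the conservative option, with hypothetical stanza paths and field values: scoping _meta to the Heavy Forwarder's own local inputs, rather than [default], keeps it away from the splunktcp input that carries the already-tagged traffic from the Universal Forwarders.

# inputs.conf on the Heavy Forwarder (paths and values are examples only)
[monitor:///var/log/hf_local_app.log]
_meta = environment::prod collector::hf01

[splunktcp://9997]
# no _meta here, so the fields set by the Universal Forwarders pass through unchanged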
Hello Splunkers!! I have the below code for my dashboard. In this code I have an issue with two panels, which I have mentioned below. In the "Vulnerable Hosts" panel I have used a drilldown, and that drilldown token is used in the "Vulnerabilities" panel. But the drilldown is not working. My expectation is that once I click any value in the upper panel, the lower panel's values will populate. Please suggest some ideas on this.

Panel: <title>Vulnerable Hosts : Selected host is "$hostname$"</title>
Panel: <title>Vulnerabilities</title>

Dashboard link: https://drive.google.com/file/d/1UCguHdcAfIcz2QXOUJvvULxGE-YSDuVj/view?usp=drivesdk
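Since the dashboard XML itself isn't visible here, only a generic hedged sketch of the usual token pattern is possible; the index, field, and search text below are placeholders. The upper panel's drilldown sets the hostname token from the clicked row, and the lower panel's search references that token; a common gotcha is the drilldown setting a differently named token (or none) than the one the lower panel's search expects.

<panel>
  <title>Vulnerable Hosts : Selected host is "$hostname$"</title>
  <table>
    <search>
      <query>index=vuln_index | stats count by host</query>
    </search>
    <drilldown>
      <set token="hostname">$row.host$</set>
    </drilldown>
  </table>
</panel>
<panel>
  <title>Vulnerabilities</title>
  <table>
    <search>
      <query>index=vuln_index host="$hostname$" | table _time signature severity</query>
    </search>
  </table>
</panel>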
Hello all, I have the problem that I can read the data only from the "ERROR:" part of the line up to the first "{" character. The error text can always be different. Example of my log file:

2022/12/30 13:09:38.584 ERROR: Failed to manipulate address {F1909AddressManipulation.run[179]} Thread-5618073
... 36 lines omitted ...
at glf1900.glf1909.core.validation.F1909AddressManipulation.run(F1909AddressManipulation.java:103) [GLF1909-V235_27_0003.jar:?]
at glf1900.glf1909.core.F1909ValidateShipment.run(F1909ValidateShipment.java:561) [GLF1909-V235_27_0003.jar:?]
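A small sketch, assuming the goal is to pull the variable error message that sits between "ERROR:" and the first "{" into its own field (the field name error_message is made up here). The character class [^{]+ stops at the first "{" regardless of what the message says.

... | rex field=_raw "ERROR:\s*(?<error_message>[^{]+)"
| eval error_message=trim(error_message)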
Hi, I got this query:

| tstats summariesonly=t allow_old_summaries=t dc(All_Traffic.dest_port) as num_dest_port dc(All_Traffic.dest_ip) as num_dest_ip from datamodel=Network_Traffic by All_Traffic.src_ip, All_Traffic.action
| rename "All_Traffic.*" as "*"
| where num_dest_port > 100 OR num_dest_ip > 100 AND num_dest_port > num_dest_ip
| eval desv=num_dest_port / num_dest_ip
| where desv > 1 AND action!="blocked"

And the result is: (screenshot not included here). My solution is halfway done because it deletes most of them, but not the ones where the src_ip is duplicated and the desv is different from one another (I don't know why). What I need is for all duplicates with the same IP to be deleted. This is what I have:

| dedup src_ip
| where desv > 1 AND action!="blocked" AND action!="teardown"

Thanks!
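One likely reason for the leftover duplicates, given the query above: the tstats by clause splits on both src_ip and action, so the same src_ip can produce one row per action value, each with its own desv. A sketch that keeps only one row per src_ip (here the highest desv, since dedup keeps the first row it sees), leaving the original where logic unchanged:

| tstats summariesonly=t allow_old_summaries=t dc(All_Traffic.dest_port) as num_dest_port dc(All_Traffic.dest_ip) as num_dest_ip from datamodel=Network_Traffic by All_Traffic.src_ip, All_Traffic.action
| rename "All_Traffic.*" as "*"
| where num_dest_port > 100 OR num_dest_ip > 100 AND num_dest_port > num_dest_ip
| eval desv=num_dest_port / num_dest_ip
| where desv > 1 AND action!="blocked" AND action!="teardown"
| sort 0 - desv
| dedup src_ip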
Hello!! I want to read index=test line by line and then analyze each log with my log_dict and parse_log functions. Is it possible? I am very desperate to solve this problem. Please help me.. ㅠ.ㅠ

@Configuration()
class GenerateTESTCommand(GeneratingCommand):
    event_log = read event_log(index)

    def generate(self):
        log = self.log_dict(self.event_log)
        if log:
            try:
                result = self.parse_log(log)
                yield result
            except BaseException as ex:
                print(log, ex)
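A possible sketch, assuming what's wanted is to run the parsing functions over the events of index=test: a custom search command can't simply "read an index" as a class attribute, but a streaming command receives each event piped into it (for example, index=test | parselog), so the parser can run once per event. The command name, class name, and placeholder parse_log body below are assumptions, not the poster's actual code.

import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class ParseLogCommand(StreamingCommand):
    """Runs a parser over each event piped into the command."""

    def parse_log(self, raw):
        # placeholder: put the real log_dict / parse_log logic here
        return {"parsed_prefix": raw[:50]}

    def stream(self, records):
        for record in records:
            try:
                # add the parsed fields to the event and pass it along
                record.update(self.parse_log(record.get("_raw", "")))
            except Exception as ex:
                record["parse_error"] = str(ex)
            yield record

dispatch(ParseLogCommand, sys.argv, sys.stdin, sys.stdout, __name__)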
Here is an example of SPL I am trying to run:

| makeresults
| eval ProxyUser="User1,User2,User3"
| makemv delim="," ProxyUser
| mvexpand ProxyUser
| map maxsearches=0 search="search index=edrlogs* SubjectName=*$ProxyUser$ earliest=-24h | eval ProxyUser1=$ProxyUser$"
| fillnull value="N/A"
| table _time SubjectName EndpointName IPAddress ProxyUser1

I am getting results; however, the ProxyUser1 field is empty. The initially searched value of ProxyUser has been eval'd to a new field named ProxyUser1 within the map command. I have read other posts saying that an eval inside the map search should do the trick, but I believe I am doing something wrong here. Any leads would be much appreciated.
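A sketch of one likely fix, keeping everything else as above: inside map, $ProxyUser$ is substituted as literal text, so eval ProxyUser1=$ProxyUser$ becomes eval ProxyUser1=User1 and is treated as a reference to a (non-existent) field named User1. Wrapping the token in escaped quotes makes it a string literal instead:

| map maxsearches=0 search="search index=edrlogs* SubjectName=*$ProxyUser$ earliest=-24h | eval ProxyUser1=\"$ProxyUser$\""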
Hi all, I would like to display panels only if they are selected in a multi-select dropdown, so I use <condition match=....> to compare the input string. Because users may select more than one keyword for panels, I want the condition to allow multiple input values. E.g. if users select Panel A and Panel B, then both panels should be shown and Panel C is hidden. But my code doesn't work as I expected: no matter whether I choose Panel A or Panel B from the dropdown, none of the panels are displayed. Could anyone help check which part of the code I wrote is wrong?

<choice value="panelA">Panel A</choice>
<choice value="panelB">Panel B</choice>
<choice value="panelC">Panel C</choice>
<change>
  <condition match="match('value',&quot;panelA&quot;)">
    <set token="tokenA">true</set>
    <unset token="tokenB" />
    <unset token="tokenC" />
    <set token="tokenA_B">true</set>
  </condition>
  <condition match="match('value',&quot;panelB&quot;)">
    <set token="tokenB">true</set>
    <unset token="tokenA" />
    <unset token="tokenB" />
    <set token="tokenA_B">true</set>
  </condition>
  <condition match="match('value',&quot;panelC&quot;)">
    <set token="tokenC">true</set>
    <unset token="tokenA" />
    <unset token="tokenB" />
    <unset token="tokenA_B" />
  </condition>
...
<row>
  <panel depends="$tokenA$, $tokenA_B$,">
    <title>Table 1</title>
......

Thank you.
I am trying to make a custom function for Cybereason; however, as I am not so familiar with Python, I was wondering if there is a way to pull the credentials from the existing app so that I do not have to type the username and password in clear text in my custom function.
Hi all, with the Splunk Add-on for Microsoft Office 365 (4.2.1) on Splunk Enterprise 9, we have a problem when configuring it on a Splunk instance with IPv6 enabled:

2022-12-29 16:04:38,155 level=ERROR pid=2704844 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:72 | datainput=b'Staff_Management_Activity_AzureAD' start_time=1672301078 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/requests/models.py", line 434, in prepare_url
    scheme, auth, host, port, path, query, fragment = parse_url(url)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/urllib3/util/url.py", line 397, in parse_url
    return six.raise_from(LocationParseError(source_url), None)
  File "<string>", line 3, in raise_from
urllib3.exceptions.LocationParseError: Failed to parse: https://::1:8089/servicesNS/nobody/splunk_ta_o365/configs/conf-inputs/splunk_ta_o365_management_activity%3A%2F%2FStaff_Management_Activity_AzureAD

It seems the localhost resolves to ::1 instead of [::1]. There is no problem if the host doesn't have IPv6 enabled. Would anyone please help? Thanks and regards.
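A heavily hedged workaround sketch, not a fix for the add-on's URL handling: if the splunkd management interface on this instance does not strictly need IPv6, server.conf has settings controlling IPv6 listening and which IP version Splunk prefers when connecting, which may keep the internally generated management URI in an IPv4 form the add-on can parse. Whether this resolves this particular input error is an assumption.

# $SPLUNK_HOME/etc/system/local/server.conf (only if IPv4 is acceptable for splunkd management)
[general]
listenOnIPv6 = no
connectUsingIpVersion = 4-only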
Hi, recently we have installed the ABLR on our production system. We were told that we need to bounce the database in order for the agent to start monitoring. Is this a requirement? This is an Oracle 19c database. Thanks & regards
I have reset the admin password on many Splunk instances, but this one is hung up for some reason. Please see the screenshot: the Set Password box never goes away, and this is preventing me from adding this instance to the Distributed Search list on another machine.

Encountered the following error while trying to save: Status 401 while sending public key to search peer https://legspkds01:8089: Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file.
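A sketch of the two remedies the error message itself points at, with placeholder passwords; both are applied on the search peer (legspkds01), and the server.conf change requires a splunkd restart.

# set a non-default admin password on the peer from the CLI (passwords are placeholders)
$SPLUNK_HOME/bin/splunk edit user admin -password 'NewStrongPassword' -auth admin:CurrentPassword

# or, less recommended, allow remote login despite the default password:
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
allowRemoteLogin = always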
Hi, I am new to Splunk and I'm trying to figure out how it works. I downloaded the splunk-sdk-java project from GitHub, and I have a free trial as well. I am trying to do the login test to see if I am able to connect, but I am getting this error:

Exception in thread "main" java.lang.NoClassDefFoundError: com/splunk/ServiceArgs
    at login.main(login.java:23)
Caused by: java.lang.ClassNotFoundException: com.splunk.ServiceArgs
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:606)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:168)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)

Can someone guide me to solve this, please?
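NoClassDefFoundError: com/splunk/ServiceArgs usually just means the Splunk SDK jar is not on the classpath when compiling or running. A minimal sketch, with the jar name and version as an assumption (use whatever jar your SDK build produces, or the com.splunk:splunk Maven artifact if you build with Maven):

javac -cp splunk-1.9.3.jar login.java
java -cp .:splunk-1.9.3.jar login
(on Windows use ";" instead of ":" as the classpath separator)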
Hi, Happy Holidays to everyone. I am trying to get a user report. The system is Linux. The report must (or should) have the following included in the .png file (requirements list/screenshot not reproduced here). Can Splunk do all that? I know it's capable of some, but that seems excessive. Is there an SPL query that can achieve this? Thanks
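Since the requirements list isn't visible here, only a very generic hedged sketch is possible; the index and sourcetype below are assumptions for typical Linux authentication data (for example, from the Splunk Add-on for Unix and Linux). A per-user activity summary might start from something like this, which could then be scheduled and exported as part of a report or dashboard:

index=os sourcetype=linux_secure earliest=-7d
| stats count as events, dc(host) as hosts, min(_time) as first_seen, max(_time) as last_seen by user
| convert ctime(first_seen) ctime(last_seen)
| sort - events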