All Topics


I'm trying to install Splunk Phantom on a CentOS server, but I'm getting the error below.

    About to proceed with Phantom install
    Do you wish to proceed [y/N] y
    sed: can't read /opt/phantom/bin/stop_phantom.sh: No such file or directory
    Enter username: vikram@abc.com
    Enter password: **********
    ./phantom_setup.sh: line 357: python: command not found
    ./phantom_setup.sh: line 358: python: command not found
    21 files removed
    Updating phantom repo package
    Error updating Phantom Repo package
    Errors during downloading metadata for repository 'phantom-base':
      - Status code: 404 for https://***@repo.phantom.us/phantom/4.5/base/8/x86_64/repodata/repomd.xml (IP: 54.165.15.205)
    Error: Failed to download metadata for repo 'phantom-base': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
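The two "python: command not found" lines suggest the host has no `python` command on its PATH (CentOS 8 ships only `python3` by default). A minimal sketch of the check the installer implicitly relies on — `pick_python` is a hypothetical helper, not part of the Phantom installer, and whether that Phantom version supports python3 is a separate question:

```shell
# Report which interpreter is available before running phantom_setup.sh.
# pick_python is a hypothetical helper for illustration only.
pick_python() {
  if command -v python >/dev/null 2>&1; then
    echo python
  elif command -v python3 >/dev/null 2>&1; then
    echo python3
  else
    return 1
  fi
}
pick_python
```

On RHEL/CentOS 8 you can map `python` to `python3` with `sudo alternatives --set python /usr/bin/python3`. The 404 on repodata/repomd.xml is a separate, repo-side issue.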
Hi Splunk Community, I have a query with 5 event types:

    index=apple source=Data AccountNo=* eventType=Dallas OR eventType=Houston OR eventType="New York" OR eventType=Boston OR eventType="San Jose"
    | table AccountNo eventType _time

An account has to pass eventType=1 to reach the next stage, i.e. eventType=2, and so on; only then can we consider it a successful account. Now I want a query for the unsuccessful accounts, meaning accounts that did not pass eventType=1 but reached later stages like eventType=2 or eventType=3. Currently I'm using this query, but it's not working:

    index=apple source=Data AccountNo=* eventType!=1

Please help.
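One way to sketch this in SPL (assuming "stage 1" corresponds to the Dallas event type — adjust to whatever your actual first stage is): gather each account's observed event types, then keep only accounts that have some stage but never the first one:

```spl
index=apple source=Data AccountNo=* (eventType=Dallas OR eventType=Houston OR eventType="New York" OR eventType=Boston OR eventType="San Jose")
| stats values(eventType) AS stages BY AccountNo
| where isnull(mvfind(stages, "^Dallas$"))
```

A plain `eventType!=1` can't express this, because it filters individual events rather than accounts; the stats-then-where pattern evaluates each account's full set of events.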
Hello, I am attempting to combine 2 reports (one is a normal stats search and the other is a pie chart built from the data produced by the first report's search). I have searched and tried numerous different things, but none have solved the issue. Ex:

    Windows Monthly Data
    Windows Monthly Data Pie Chart

I also have to combine 4 firewall log reports into 1 report. Ex:

    Firewall: Building G
    Firewall: Building F

and so on for the remaining 2 firewall log reports. If anyone could offer any advice or suggestions it would be greatly appreciated! Thank You
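If the goal is a single artifact, one common approach is a dashboard that embeds both saved reports by reference (and can then be scheduled for PDF delivery). A minimal Simple XML sketch, assuming the reports are saved under exactly the names shown above:

```xml
<dashboard>
  <label>Windows Monthly Data (combined)</label>
  <row>
    <panel>
      <table>
        <search ref="Windows Monthly Data"/>
      </table>
    </panel>
    <panel>
      <chart>
        <search ref="Windows Monthly Data Pie Chart"/>
        <option name="charting.chart">pie</option>
      </chart>
    </panel>
  </row>
</dashboard>
```

The four firewall reports can be combined the same way, one panel per saved report.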
Hi, I am new to Splunk and inherited the infrastructure. I noticed that bucket creation keeps failing; the hot/warm file system is at 70% on one site and 90% on the other. Can anyone help, please? Thank you
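If bucket creation is failing because the hot/warm volume is filling, retention and size caps in indexes.conf are the usual levers. A sketch with placeholder values (the index name and all numbers are assumptions — check splunkd.log for the exact bucket error first):

```ini
# indexes.conf -- per-index caps; all numbers below are placeholders
[main]
homePath.maxDataSizeMB  = 250000    # cap hot+warm on the fast volume
maxTotalDataSizeMB      = 500000    # cap the whole index (hot+warm+cold)
frozenTimePeriodInSecs  = 7776000   # freeze (delete/archive) buckets older than 90 days
```

Whichever limit is hit first wins, so size the hot/warm cap to leave headroom on the volume.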
Hello, I am using the Storage Passwords mechanism to store important keys and passwords used in an app. The passwords.conf file is generated properly from the SDK, but while decrypting passwords for use in the app, I am not getting a clear_password back from the Splunk Storage Passwords mechanism — just an exception mentioning clear_password. The piece of code that is not working is given below; the app runs on Splunk Enterprise 8.0.8:

    def get_passwords(app):
        '''Retrieve the user's API keys from storage/passwords'''
        pwd_dict = {}
        try:
            sessionKey = sys.stdin.readline().strip()
            # list all credentials
            entities = entity.getEntities(['storage', 'passwords'], namespace=app,
                                          owner='nobody', sessionKey=sessionKey)
            # return set of credentials
            for i, c in entities.items():
                pwd_dict[c['username']] = c['clear_password']
        except Exception as e:
            # the exception raised here mentions clear_password
            raise Exception("Could not get %s passwords from storage. Error: %s" % (app, str(e)))
        return pwd_dict

Any suggestions?
Hello, I'm upgrading a search head from 7.3.0 to 8.2.1. First I upgraded it to 8.1.5 and didn't experience any problems. Then I upgraded to 8.2.1, and the knowledge bundle replication to the search peers failed with the following errors in the logs.

In the search head's splunkd.log:

    08-23-2021 18:48:56.228 +0200 WARN BundleTransaction [2589 BundleReplThreadPoolWorker-1] - Upload bundle="/opt/splunk/current/var/run/sh01-1629737334.bundle" to peer name=idx01 uri=https://10.10.22.14:8089 failed; http_status=409 http_description="Conflict"
    08-23-2021 18:48:56.234 +0200 ERROR ClassicBundleReplicationProvider [2589 BundleReplThreadPoolWorker-1] - Unable to upload bundle to peer named idx01 with uri=https://10.10.22.14:8089.

In the indexers' splunkd.log:

    08-23-2021 18:48:56.225 +0200 ERROR DistBundleRestHandler - Checksum mismatch: received copy of bundle="/opt/splunk/var/run/searchpeers/sh01-1629737334.bundle" has transferred_checksum=15251024310319607191 instead of checksum=5204570444500435281 -- removing temporary file="/opt/splunk/var/run/searchpeers/sh01-1629737334.bundle.c2ead49153e7b186.tmp". This should be fixed with the next knowledge bundle replication. If it persists, please check your filesystem and network interface for errors.

The bundle size is not big, but the size reported in the .info file is quite different from the size on the filesystem:

    [splunk@sh01 run]$ ls -l
    ...
    -rw------- 1 splunk splunk 4280079 Aug 23 18:48 sh01-1629737334.bundle
    -rw------- 1 splunk splunk      42 Aug 23 18:48 sh01-1629737334.bundle.info
    [splunk@sh01 run]$ cat sh01-1629737334.bundle.info
    checksum,size
    5204570444500435281,6574080

The indexers are in a cluster and all nodes are running version 7.3.0. I know Splunk recommends the manager node be on a higher or equal version, but I'm validating some custom apps on a test search head, which I wanted to do on 8.2. In another, non-production environment a search head on 8.2 works (no bundle replication problems) with 7.3.0 indexers.
I have a simple TA that makes a request to a REST endpoint and writes the data to an index (no UI associated with it, only indexing). I'm exploring a distributed Splunk environment (with a forwarder, an indexer, and a search head), but I'm unsure where to install the TA: on the forwarder, the indexer, or somewhere else? Reading similar forum posts, it appears the answer can depend on the TA — but what characteristics of a TA determine where it should be installed?
I have my paging policies set to send a push notification to all of my devices, but I am only getting the audio alert through my Bluetooth. I have the current Splunk app v7.52 and an Android Galaxy Note20.
I need a Splunk ID for taking a Splunk Certification exam on PearsonVUE. How do I get the 6-digit ID?  
The latest version of the Linux x64 php-agent (21.7.0.4560) is packaged with an out-of-date component: netty (4.1.38). Currently this has some CVEs logged against it:

    CVE-2019-20445
    CVE-2019-20444

under the path:

    /proxy/lib/tp/grpc-netty-shaded-1.24.0.jar

Does anyone know if this is something that can be patched, or if there is an intention to include a more up-to-date version in a future build?
If we want to use Splunk as a central log monitoring tool, how can we monitor COTS application logs in Splunk?
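For file-based COTS applications, the usual pattern is a universal forwarder on the application host with a monitor stanza pointing at the log directory. A sketch — the path, index, and sourcetype below are placeholders, not anything specific to a given product:

```ini
# inputs.conf on the universal forwarder installed on the COTS host
[monitor:///opt/cots_app/logs/]
index = cots_app
sourcetype = cots:app:log
disabled = 0
whitelist = \.log$
```

Create the target index on the indexer first, and add a props.conf sourcetype definition there if the application's timestamp or line-breaking needs help.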
Usually Splunk seems to interpret hyphens in Event Viewer channel names as folders. I have this input, but it's not working:

    [WinEventLog://Microsoft-ServerManagementExperience]
    disabled = 0
    index = wineventlog

Here is a screenshot of the folder I'd like to monitor with Splunk.
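For channels like this, the stanza name must match the full channel name exactly as Windows reports it (hyphens are literal characters, not folder separators). You can list the exact names with `wevtutil el` on the host and paste the matching one into the stanza — the channel name below is taken from the post and may need a full `Microsoft-.../Operational`-style suffix:

```ini
# inputs.conf -- channel name must match `wevtutil el` output exactly
[WinEventLog://Microsoft-ServerManagementExperience]
disabled = 0
index = wineventlog
```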
I am not sure if anyone else has encountered this, but in our distributed environment, which was just upgraded from 8.0.3 to 8.2.2, we have noticed issues with the health report manager. The new IOWait feature in the health report is extremely "chatty", even though all other aspects of the deployment are in great shape. Although we can successfully disable the IOWait feature in the console and via a local health.conf file, it is still being included in the health report. I've opened a case with Splunk support, but was wondering if anyone else has encountered this behavior.
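For reference, a health.conf sketch of the disable toggles (placed in a local health.conf on the affected instances). The feature stanza name below is my reading of the iowait feature and is worth double-checking against `$SPLUNK_HOME/etc/system/default/health.conf` on your version:

```ini
# health.conf -- local override
[feature:iowait]
disabled = 1          # drop the feature from the health report
alert.disabled = 1    # suppress its alerts as well
```

If the feature still appears after a restart with this in place, that mismatch itself is useful evidence for the support case.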
Hello, I am having some issues creating a props.conf file for the following sample data events. It's a text file with a header in it. I created one, but it's not working. Any help will be highly appreciated. The events are given below (the UserId and Timestamp values were highlighted in the original post):

    UserId, UserType, System, EventType, EventId, STF, SessionId, SourceAddress, RCode, ErrorMsg, Timestamp, Dataload, Period, WFftCode, ReturnType, DataType
    2021-08-19 08:05:52,763-CDT - FETCE,SRGEE,SAATCA,FETCHFA,FI,000000000,E3CE4819360E57124D220634E0D,saatca,00,Successful,20210819130552,UCJ3R8,,,1,0
    2021-08-19 08:06:53,564-CDT - FETCE,SRGEE,SAATCA,FA,FETCHFI,000000000,E3CE4819360E57124D220634E0D,saatca,00,Successful,20210819130653,UCJ3R8,,,1,0

What I wrote in my props.conf file:

    [__auto__learned__]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=([\r\n]+)
    INDEXED_EXTRACTIONS=psv
    TIME_FORMAT=%Y-%m-%d %H:%M:%S .%3N
    TIMESTAMP_FIELDS=TIimestamp
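For comparison, a sketch of a props.conf that matches the sample above. Note three differences from the original attempt: the fields are comma-separated (csv, not psv), the Timestamp field value (e.g. 20210819130552) uses a compact format, and TIMESTAMP_FIELDS has a typo ("TIimestamp"). The sourcetype name is a placeholder:

```ini
[cots:feed]
INDEXED_EXTRACTIONS = csv          # sample rows are comma-separated
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIMESTAMP_FIELDS = Timestamp       # original had "TIimestamp"
TIME_FORMAT = %Y%m%d%H%M%S         # matches 20210819130552
```

One caveat: each data row also carries a leading "2021-08-19 08:05:52,763-CDT - " prefix before the CSV fields, which will throw off header-based extraction; that prefix would need to be stripped (or accounted for) before this sketch lines up cleanly.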
As mentioned below, the prod column has multiple values, and I want to split it on the \n newline delimiter to get the output shown in the output image.

Current data: [screenshot]

Expected output: [screenshot]

Thanks in advance.
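A sketch in SPL, assuming the multivalue text lives in a field named `prod` (taken from the screenshot description) and is newline-delimited: split it into a multivalue field, then expand to one row per value:

```spl
... | makemv tokenizer="([^\n]+)" prod
| mvexpand prod
```

`makemv tokenizer` takes a regex for each token, which sidesteps having to pass a literal newline as a delimiter.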
Hey Everyone! I'm in need of some help, advice, a Ouija board (lol)... whatever can do the trick. I want to know whether it is possible to consolidate data from a search that is not generated on Splunk with a Splunk report. My supervisor wants to receive 1 report instead of 2. Do any of you know if this is even possible? Thanks, Cyber_Nerd3
Is it possible to configure a 6.5.2 universal forwarder to send events to an HTTP Event Collector (on 7.2)? I have a series of universal forwarders that had been sending logs to an old indexer on port 9997 — both the forwarders and the indexer are slated for retirement "soon", as part of an app that's been mostly retired already, but I need to keep them going a few more months.

The indexer hardware died badly, and I thought I'd easily be able to switch these UFs over to our current indexer, which runs 7.2 (upgrading soon to 8.x), but that indexer only listens using the HEC on port 8088. It's behind an AWS ALB, so opening up port 9997 would be problematic.

Is this even *supposed* to be possible (sending events from a 6.5.2 UF to a 7.2 HEC)? I've tried putting the following into local/outputs.conf, but it seems to have no impact. Splunk isn't complaining about the statements when it starts up, but it also isn't sending any network traffic on port 8088.

    [httpout]
    httpEventCollectorToken = [642bc63f-8e62-4b3e-9579-f146345eeaa2]
    uri = http://splunk.domain-name.com:8088
    batchSize = 65536
    batchTimeout = 5
Hey, I am facing an issue forwarding data via tcpout. My goal is to forward some data to the main indexer, and a subset of that data, with specific props.conf settings applied, to another indexer — while additionally keeping the subset on the main indexer without those additional props.conf settings.

Problem: the data is actually sent to both destinations with the props.conf applied to both tcpout groups.

    sourcetype A + sourcetype XXX ---> also using props/transforms (should be ignored) ---> Main Indexer
    sourcetype A ---> using props/transforms (required) ---> Secondary Indexer

Goal:

    sourcetype A + sourcetype XXX ---> without the additional props/transforms ---> Main Indexer
    sourcetype A ---> some props/transforms ---> Secondary Indexer

Is there any solution to this problem? Thank you for helping. Regards, Christoph
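Selective routing is normally done with _TCP_ROUTING on a heavy forwarder: define both output groups, send everything to the main group by default, and add a transform that sets both groups for sourcetype A. A sketch — group names, hosts, and the sourcetype name are placeholders:

```ini
# outputs.conf
[tcpout]
defaultGroup = main_indexers

[tcpout:main_indexers]
server = main-idx.example.com:9997

[tcpout:secondary_indexers]
server = secondary-idx.example.com:9997

# props.conf
[sourcetypeA]
TRANSFORMS-route_A = route_A_to_both

# transforms.conf
[route_A_to_both]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = main_indexers,secondary_indexers
```

One caveat that may explain the behavior described above: index-time props/transforms run once, where the data is first parsed, so both output groups receive the same parsed events — a single pipeline can't apply different parsing per destination.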
Hi, in my query:

    index="my_local" | sort -Date

I get a list of items, and if I look at one item (and click "show as raw text") it looks like this:

    {"Level":"Info","MessageTemplate":"ApiRequest","RenderedMessage":"ApiRequest","Properties":{"httpMethod":"GET","statusCode":200}, ...}

Since a lot of the properties are wrapped inside "Properties", I always have to expand it manually by clicking the expand icon (with the plus sign). Is there any way to get the search results already expanded, so I don't have to click "Properties" every time? Many thanks!
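As far as I know there is no setting to render the raw-event JSON pre-expanded in the event viewer, but if the goal is simply to see the nested values without clicking, one sketch is to flatten Properties into top-level fields in the search itself (the renamed field names below follow the sample event):

```spl
index="my_local"
| sort -Date
| spath input=_raw
| rename Properties.* AS *
| table _time Level RenderedMessage httpMethod statusCode
```

`spath` extracts the nested keys as `Properties.httpMethod` etc., and the rename promotes them so they show directly in the results table.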
Hi, I have the below data in a lookup, and I need to add up each row's data. For example, for the first row I need to add up the total 'OFF', total 'B', and total 'V' values and show the counts in 3 different columns for OFF, B, and V. Similarly, for each row I need to add up the same data values and show them in a column.

[screenshot]

Any query or commands?
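Hard to be precise without the screenshot, but if each row has several columns whose cells contain values like OFF, B, or V, a `foreach` sketch can count each value per row. The lookup name and the `day_*` column pattern are assumptions — substitute your actual column names:

```spl
| inputlookup my_schedule.csv
| eval OFF=0 | eval B=0 | eval V=0
| foreach day_* [ eval OFF = OFF + if('<<FIELD>>'="OFF", 1, 0) ]
| foreach day_* [ eval B = B + if('<<FIELD>>'="B", 1, 0) ]
| foreach day_* [ eval V = V + if('<<FIELD>>'="V", 1, 0) ]
```

Each `foreach` pass walks the matching columns of the row and increments the corresponding counter, leaving three new count columns per row.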