All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi! I am stuck on my home lab trying to install Phantom on a VM. All steps for soar-prep completed fine, but when I run ./soar-install I see errors like:

Error: Cannot run as the root user
Error: The install directory (/opt/phantom) is not owned by the installation owner (root)
Pre-deploy checks failed with errors

The directory is owned by root, with all folders in it (image attached).

{"component": "installation_log", "time": "2024-11-10T02:02:56.071875", "logger": "install.deployments.deployment", "pid": 2005, "level": "ERROR", "file": "/opt/phantom/splunk-soar/install/deployments/deployment.py", "line": 175, "message": "Error: The install directory (/opt/phantom) is not owned by the installation owner (root)", "install_run_uuid": "17e0674c-b035-4696-9f75-acf2297ab325", "start_time": "2024-11-10T02:02:54.547287", "install_mode": "install", "installed_version": null, "proposed_version": "6.3.0.719", "deployment_type": "unpriv", "continue_from": null, "phase": "pre-deploy", "operation_status": "failed", "time_elapsed_since_start": 1.524704}
{"component": "installation_log", "time": "2024-11-10T02:02:56.072144", "logger": "install", "pid": 2005, "level": "ERROR", "file": "/opt/phantom/splunk-soar/./soar-install", "line": 105, "message": "Pre-deploy checks failed with errors", "install_run_uuid": "17e0674c-b035-4696-9f75-acf2297ab325", "start_time": "2024-11-10T02:02:54.547287", "install_mode": "install", "installed_version": null, "proposed_version": "6.3.0.719", "deployment_type": "unpriv", "continue_from": null, "time_elapsed_since_start": 1.525168, "pretty_exc_info": ["Traceback (most recent call last):", " File \"/opt/phantom/splunk-soar/./soar-install\", line 82, in main", " deployment.run()", " File \"/opt/phantom/splunk-soar/install/deployments/deployment.py\", line 145, in run", " self.run_pre_deploy()", " File \"/opt/phantom/splunk-soar/usr/python39/lib/python3.9/contextlib.py\", line 79, in inner", " return func(*args, **kwds)", " File \"/opt/phantom/splunk-soar/install/deployments/deployment.py\", line 178, in run_pre_deploy", " raise DeploymentChecksFailed(", "install.install_common.DeploymentChecksFailed: Pre-deploy checks failed with errors"]}
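One reading of the two errors is that the installer was launched as root for an unprivileged (deployment_type "unpriv") install: the pre-deploy check refuses root and then complains that root, rather than a dedicated install owner, owns /opt/phantom. A minimal sketch of the layout that check appears to expect, assuming an illustrative service-account name "phantom" (this is a sketch, not the official procedure):

# Create a non-root account to own the installation
sudo useradd -m phantom
# Make the install directory owned by that account instead of root
sudo chown -R phantom:phantom /opt/phantom
# Run the installer as the unprivileged owner rather than as root
sudo -u phantom /opt/phantom/splunk-soar/soar-install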
So I have an index with working alerts, thanks to your help. I have a question about two separate events that occur at the same time.
1st event: Invalid password provided for user : xxxxxxxx (this is in the event)
2nd event: GET /Project/1234/ HTTP/1.1 401 (this basically tells me about the first event, but also which Project they tried to connect to)
How would one write a search to get the username from the invalid-password event and correlate it with the project at the same time underneath, e.g. "User xxxxxx put in an invalid password for Project 1234"? I'm thinking it would be easier to get my team to write it all in one event in another release.
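A minimal sketch of one way to line the two events up, assuming they share the same host and land within the same second (the base search, field names, and one-second bucket are illustrative):

index=your_index ("Invalid password provided for user" OR "GET /Project/")
| rex "Invalid password provided for user : (?<user>\S+)"
| rex "GET /Project/(?<project>\d+)/"
| bin _time span=1s
| stats values(user) as user values(project) as project by _time host
| where isnotnull(user) AND isnotnull(project)

If the two events can arrive slightly apart, widening the span (or using transaction) is the usual alternative.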
So for our graduation project, we've decided to use Splunk as our base SIEM to build on. However, on further inspection, it turns out that Splunk Enterprise Security has a lot of features that we need. Is there any chance that Splunk would let us use it without paying?
Hi guys, syslog is sent to the forwarder's IP over TCP port 9523, but I am unable to receive those syslog events on the forwarder or the indexer. How can I check whether the syslog data is being received on the forwarder, and how do I get it into the indexer? The logs come from a network device.
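A minimal sketch of a raw TCP input on the forwarder for that port, assuming the data should simply be indexed as syslog (the index name is illustrative):

# inputs.conf on the forwarder - listen for raw TCP syslog on the port the device sends to
[tcp://9523]
sourcetype = syslog
index = network

After a restart, netstat or ss on the forwarder should show splunkd listening on 9523, and splunkd.log records whether the input opened successfully.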
Hello ES Splunkers, I want to know if there are any applications that can be installed alongside Enterprise Security to enhance our security posture. Is the ITSI app added value for the security posture?
Dear Sir/Madam, we have installed the on-premises version of AppDynamics with various agents in an operational environment. We decided to update the controller (not the agents). During the controller update we encountered a problem and had to reinstall the controller, so the controller access key changed. It takes a lot of time to coordinate and update the agents in the operational environment, so we have not changed them. Following the 'Change Account Access Key' instructions, we changed the new account access key (for Customer1 in single-tenant mode) back to the old account access key, without changing any configuration on the agent side, including the access keys. Now every agent is OK (e.g. app agents, DB agents, etc.), but the database collectors do not work. Although the database agent is registered, we can't add any database collector. I checked the controller log and found the following exception: "dbmon config ... doesn't exist". It seems that the instructions in the link above are not enough for the database agent and collectors, and some extra steps are needed. Thanks for your attention. Best regards.
Hello everyone, I ran into a problem with Splunk UBA that I need help with. I have more than one domain in Splunk UBA, and it mistakenly recognizes some users as the same user because of name similarity. These users are not the same person; they only have similar values in the login IDs field. How can I solve this so that users with similar login IDs do not produce false-positive anomalies? Thank you for your guidance.
Hi, I am new to Splunk administration. We have a syslog server in our environment that collects logs from our network devices. Our clients asked us to install an LTM (Local Traffic Manager) load balancer in front of the syslog server. I have no idea what a load balancer does, how to install it, or whether it is a Splunk component (full package or lightweight package). Please suggest how to set up this environment. Also, what is recommended for network logs: UDP or TCP? I want to learn the syslog server and its end-to-end configuration with Splunk, so please share the latest documentation link (I am not asking about an add-on).
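For context, a very common pattern is a syslog daemon (rsyslog or syslog-ng) receiving from the network devices and writing per-host files, with a universal forwarder monitoring those files. A minimal rsyslog sketch, assuming UDP 514 and an illustrative directory layout:

# /etc/rsyslog.d/50-network.conf - receive syslog and write one file per sending host
module(load="imudp")
input(type="imudp" port="514")
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* ?PerHostFile

The LTM is an F5 product, not a Splunk component; it would sit in front of this listener to spread traffic across multiple syslog servers. UDP vs TCP is usually dictated by what the network devices support, with TCP avoiding silent drops at the cost of more overhead.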
I have a dashboard in Splunk Cloud which uses a dropdown input to set the index for all of the searches on the page, with values like "A-suffix", "B-suffix", etc. Now I want to add another search which uses a different index but has `WHERE "column"="A"`, with A being the same value selected in the dropdown, just without the suffix. I tried using eval to replace the suffix with an empty string, and I tried changing the dropdown to remove the suffix and do `index=$token$."-suffix"` in the other queries, but I can't get anything to work. It seems like I might be able to use `<eval token="token">` if I could edit the XML, but I can only find the JSON source in the web editor and don't know how to edit XML with Dashboard Studio.
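One sketch that avoids editing the dashboard definition at all is to strip the suffix inside the new search itself, since the token is substituted as literal text before the search runs (the token name, field name, and suffix below are illustrative):

index=other_index
| where column = replace("$index_token$", "-suffix$", "")

Because replace() takes a regular expression, the trailing $ anchors the match, so only a suffix at the end of the token value is removed.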
Our apps send data to the Splunk HEC via HTTP POSTs. The apps are configured to use a connection pool, but after sending data, the Splunk server responds with a status 200 and the "Connection: Close" header. This instructs our apps to close their connection instead of reusing it. How can I stop this behavior? Right now the apps are re-creating a connection thousands of times instead of just reusing the same one.
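If your Splunk version exposes keep-alive tuning for the HTTP input, the relevant knobs live in the [http] stanza of inputs.conf; the setting names below are an assumption, so verify them against the inputs.conf.spec shipped with your build before using them:

# inputs.conf [http] stanza - keep-alive tuning (verify these names against your version's spec)
[http]
# seconds an idle keep-alive connection is kept open when the server is not busy
keepAliveIdleTimeout = 7200
# seconds an idle keep-alive connection is kept open when the server is busy
busyKeepAliveIdleTimeout = 12

One commonly cited cause of Connection: Close responses is the HEC endpoint considering itself busy, so raising these limits or spreading load across more HEC receivers behind a load balancer is the usual direction to investigate.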
Hello. I'm setting up a new Splunk Enterprise environment - just a single indexer with forwarders. There are two volumes on the server: one on SSD for hot/warm buckets, and one on HDD for cold buckets. I'm trying to configure Splunk such that an index ("test-index") will only consume, say, 10 MB of the SSD volume; after it hits that threshold, the oldest hot/warm bucket should roll over to the slower HDD volume. I've run various tests, but when the index's 10 MB SSD threshold is reached, all of the buckets roll over to cold storage, leaving the SSD empty. Here is how indexes.conf is set now:

[volume:hot_buckets]
path = /srv/ssd
maxVolumeDataSizeMB = 430000

[volume:cold_buckets]
path = /srv/hdd
maxVolumeDataSizeMB = 11000000

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
homePath.maxDataSizeMB = 10

When the 10 MB threshold is reached, why is everything in hot/warm rolling over to cold storage? I had expected 10 MB of data to remain in hot/warm, with only the older buckets rolling over to cold. I've poked around and found other articles related to maxDataSizeMB, but those questions do not align with what I'm experiencing. Any guidance is appreciated. Thank you!
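One hedged explanation: with the default bucket size (maxDataSize = auto, roughly 750 MB per bucket), a single hot bucket can already be larger than the 10 MB home-path cap, so enforcing the cap moves everything at once. A sketch of the same stanza with an intentionally tiny bucket size so that several buckets fit under the cap (the 1 MB value is only sensible for a lab-scale test):

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
homePath.maxDataSizeMB = 10
# keep individual buckets small so only the oldest ones need to roll to cold
maxDataSize = 1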
I have an SPL query where I'm trying to collect all domains from the raw logs, but my regex is capturing only one domain per event. Some events have one URL and some have 20 or more. How do I capture all of the domains? Please advise.

SPL query:
.............. | rex field=_raw "(?<domain>\w+\.\w+)\/" | rex field=MessageURLs "\b(?<domain2>(?:http?://|www\.)(?:[0-9a-z-]+\.)+[a-z]{2,63})/?" | fillnull value=n/a | stats count by domain domain2 MessageURLs _raw
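By default rex keeps only the first match per event; adding max_match=0 keeps every match as a multivalue field, which can then be expanded. A sketch using the same extraction pattern, assuming the base search from the question stays unchanged (the same max_match=0 change applies to the MessageURLs rex):

| rex field=_raw max_match=0 "(?<domain>\w+\.\w+)\/"
| mvexpand domain
| stats count by domain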
Hi, I want to search and save the result as a table built from two log statements. One log statement uses a regex to extract ORDERS, and the other uses a regex to extract ORDERS and UNIQUEID. My requirement is to combine the two log statements on ORDERS and pull ORDERS and UNIQUEID into a table. I am using join to combine the two log statements on ORDERS, but my Splunk query is not returning any results.
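A common alternative to join is to match both log statements in one base search and let stats stitch them together on the shared field. A minimal sketch, assuming the rex extractions already populate ORDERS and UNIQUEID (the index and search terms are illustrative):

index=your_index ("first log statement" OR "second log statement")
| stats values(UNIQUEID) as UNIQUEID by ORDERS

If join is still preferred, the usual culprits for empty results are field names whose case differs between the two searches, or a subsearch that returns nothing for the chosen time range.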
In the Splunk app, the exception message column contains a multi-line message. However, when the same query is used for a table in Splunk Dashboard Studio, the newlines are not preserved and the message runs on continuously. Below is the Splunk app result (screenshot). Below is the table shown in the Studio (screenshot). Below is the Splunk query.

index="eqt-e2e"
| spath suite_build_name
| search suite_build_name="PAAS-InstantInk-Stage-Regression-Smooth-Transition"
| spath unit_test_name_failed{} output=unit_test_name_failed
| mvexpand unit_test_name_failed
| spath input=unit_test_name_failed
| where message!="Test was skipped"
| spath suite_build_number
| search suite_build_number="*"
| where (if("*"="*", 1=1, like(author, "%*%")))
| where (if("*"="*", 1=1, like(message, "%*%")))
| spath suite_build_start_time
| sort - suite_build_start_time
| eval suite_build_time = strftime(strptime(suite_build_start_time, "%Y-%m-%d %H:%M:%S"), "%I:%M %p")
| table suite_build_name, suite_build_number, suite_build_time, author, test_rail_name, message
| rename suite_build_name AS "Pipeline Name", suite_build_number AS "Pipeline No.", suite_build_time AS "Pipline StartTime (UTC)", author AS "Test Author", test_rail_name AS "Test Name", message AS "Exception Message"

@ITWhisperer
Hello, from my client I created a service account in the Google Cloud Platform:

{
  "type": "service_account",
  "project_id": "<MY SPLUNK PROJECT>",
  "private_key_id": "<MY PK ID>",
  "private_key": "-----BEGIN PRIVATE KEY-----\<MY PRIVATE KEY>\n-----END PRIVATE KEY-----\n",
  "client_email": "<splunk>@<splunk>-<blabla>",
  "client_id": "<MY CLIENT ID>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<MY-SPLUNK-URL>"
}

The service account was recognized by the add-on, so I assumed the connection was established correctly. Later, when I created the first input to collect the logs (a Pub/Sub input) and searched for the "Projects" connected to this service account (see the screenshot), it returned the error "External handler failed with code '1' and output ''. see splunkd.log for stderr". The stderr in splunkd.log gives no useful information (just a generic error), so I am blocked at the moment. I also downloaded the code from the Google Cloud Platform Add-on, but it is not an easy debugging process and I cannot find the actual query that the add-on performs when clicking on "Projects". Does anyone have any idea about this error? Thanks
Hi all, we want to get F5 WAF logs into Splunk. The WAF team is sending logs to our syslog server, where a universal forwarder is installed that forwards the data to our indexer. Please point me to any detailed documentation or the steps to ingest the data successfully, plus any troubleshooting if needed. I don't yet know what syslog is; I am very new to Splunk and learning. Apologies if this is a basic question, but I seriously want to learn.
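Since the WAF logs already land on the syslog server, the forwarder side usually comes down to a monitor input pointed at the files the syslog daemon writes. A minimal sketch, where the path, index, and sourcetype are placeholders to be swapped for whatever your syslog daemon and the F5 add-on actually use:

# inputs.conf on the universal forwarder - watch the files the syslog daemon writes (path illustrative)
[monitor:///var/log/remote/f5/*.log]
index = network
sourcetype = f5:waf:syslog

If the F5 add-on for Splunk is installed, its documentation lists the sourcetypes it expects, and those should replace the placeholder above.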
Hello Splunkers!! I want _time to match the timestamp in the EVENTTS field of each event while ingesting to Splunk. However, even after applying the props.conf settings below, the results still do not match after ingestion. Please advise me on the proper settings and help me fix this.

Raw event:
2024-11-07 18:45:00.035, ID="51706", IDEVENT="313032807", EVENTTS="2024-11-07 18:29:43.175", INSERTTS="2024-11-07 18:42:05.819", SOURCE="Shuttle.DiagnosticErrorInfoLogList.28722.csv", LOCATIONOFFSET="0", LOGTIME="2024-11-07 18:29:43.175", BLOCK="2", SECTION="A9.18", SIDE="-", LOCATIONREF="10918", ALARMID="20201", RECOVERABLE="False", SHUTTLEID="Shuttle_069", ALARM="20201", LOCATIONDIR="Front"

Existing props settings:
CHARSET = UTF-8
DATETIME_CONFIG =
LINE_BREAKER = [0-9]\-[0-9]+\-[0-9]+\s[0-9]+:[0-9]+:[0-9]+.\d+
NO_BINARY_CHECK = true
category = Custom
TIME_PREFIX = EVENTTS="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC

In the screenshot below, _time is still not extracted to match the timestamp in the EVENTTS field.
I am trying to simply break down a URL to extract the region and chart the use of specific URLs over time, but I just get a NULL count for everything. How do I display the counts as separate values?

[query]
| eval region=case(url like "%region1%","Region 1",url like "%region2%","Region 2")
| timechart span=1h count by region
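With case(), any URL that matches neither pattern leaves region null, and timechart then buckets those events under NULL; the same happens if the field is not actually called url at search time. A sketch with a catch-all branch so unmatched URLs show up as their own series (the "Other" label is illustrative):

[query]
| eval region=case(url like "%region1%","Region 1", url like "%region2%","Region 2", true(), "Other")
| timechart span=1h count by region

If everything still lands in one bucket, checking the extracted field name (for example with | table url or | fieldsummary) is the next step.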
See SPL-248479 in the release notes. If you are using persistent queues and see the following errors in splunkd.log:

ERROR TcpInputProc - Encountered Streaming S2S error
1. "Cannot register new_channel"
2. "Invalid payload_size"
3. "Too many bytes_used"
4. "Message rejected. Received unexpected message of size"
5. "not a valid combined field name/value type for data received"

or other S2S streaming errors, you should upgrade your HF/IHF/IUF/IDX instances (if they use persistent queues) to 9.4.0 / 9.3.2 / 9.2.4 / 9.1.7 or later. This patch also fixes all the known persistent-queue-related crashes and other PQ issues.
This is similar to a question I asked earlier today that was quickly answered, but I'm not sure I can apply that solution here because of the transpose; I'm not sure how to reference the data correctly for that. We have data with 10-15 fields in it, and we are doing a transpose like the one below. What we are looking to accomplish is to display only the rows where the values are the same, or alternatively only the rows where they are different.

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1

column    sys1    sys2
field2    a       b
field3    10      10
field4    a       a
field5    10      20
field6    c       c
field7    20      20
field8    a       d
field9    10      10
field10   20      10
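After the transpose, sys1 and sys2 are just field names (taken from the values of field1 in this sample), so they can be compared directly in a where clause. A minimal sketch for keeping only the matching rows; swapping = for != keeps only the differing rows:

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1
| where sys1 = sys2

If the system names change from search to search, foreach or a dynamic rename is usually needed instead of hard-coding them.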