All Topics

Hello fellow Splunthusiasts! I have some applications running on classic VMs; I am happily splunking their logs and everything works fine. Recently we started to deploy the same applications to Docker containers. To collect logs, I use Docker's native Splunk logging driver and receive the data through HEC. The logging driver adds its own metadata to the app's log (it either prepends a prefix or wraps the app's log in JSON; the extra information identifies the container instance). Because of this, some of my field extractors stopped working, since the format of the data actually ingested has changed. What are the best practices for writing extractors universally, so one configuration works with all ways of collecting logs? Just a side note: the point is to have the extractors in props.conf in an app distributed from the DS, so my question is about what should be addressed in the regular expression itself. Using | rex field=xxx is not an option here.
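One approach (a hedged sketch; the sourcetype name and the key=value payload layout below are assumptions for illustration) is to avoid anchoring extractions to the start of the event and key them off landmarks inside the payload instead, so the same regex matches whether or not the Docker driver prepends anything:

# props.conf (sketch only; adjust the stanza and field names to your data)
[my_app:logs]
# No ^ anchor, so a container prefix before the payload doesn't break matching.
EXTRACT-user = \buser=(?<user>\S+)
EXTRACT-action = \baction=(?<action>\S+)

For the JSON-wrapped case you may also need to tolerate escaped quotes (e.g. match both level=" and \"level\":\") or normalize the wrapper away at ingest time with SEDCMD, so one extraction style serves both collection paths.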
Hi, I have system logs being dumped on an sFTP server and would like to pull them down to local folders on the Splunk server. Can you please share a script I can use for this?
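A minimal cron-able sketch of one way to do this, assuming key-based auth; the host, user, and directory names are placeholders. A Splunk monitor:// input on the local directory would then pick the files up:

#!/bin/bash
# Pull logs from the sFTP server into a local folder (all names are placeholders).
REMOTE="loguser@sftp.example.com"
REMOTE_DIR="/outbound/syslogs"
LOCAL_DIR="/opt/splunk_ingest/syslogs"

mkdir -p "$LOCAL_DIR"
# Batch-mode sftp: change into both directories and fetch any .log files.
sftp -i ~/.ssh/id_rsa -b - "$REMOTE" <<EOF
cd $REMOTE_DIR
lcd $LOCAL_DIR
mget *.log
EOF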
Hi Community, we have 1000 servers across a mix of operating systems, and we are planning to install the OTel collector on all of them. In my environment there is no Ansible or Puppet, and the servers have no internet access, so we can't use curl to download the installer. Please suggest a better way to install OTel on all the servers. Thanks in advance!
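A common fallback when there is no configuration management and no internet access is to stage the collector packages on an internal share and script the install with the native package manager. A hedged sketch (the share path and config are placeholders; the config destination matches the splunk-otel-collector default, but verify for your version):

#!/bin/bash
# Offline install of the Splunk OTel collector from a staged package.
PKG_DIR=/mnt/internal_share/otel

if command -v rpm >/dev/null 2>&1; then
    sudo rpm -ivh "$PKG_DIR"/splunk-otel-collector-*.x86_64.rpm   # RHEL family
elif command -v dpkg >/dev/null 2>&1; then
    sudo dpkg -i "$PKG_DIR"/splunk-otel-collector_*_amd64.deb     # Debian family
fi

# Drop in a pre-built config and start the service.
sudo cp "$PKG_DIR"/agent_config.yaml /etc/otel/collector/agent_config.yaml
sudo systemctl enable --now splunk-otel-collector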
Hi and good day! I wrote a PowerShell (.ps1) script to filter the Windows services I need to monitor in our Splunk Observability dashboard. However, Splunk treats each Windows service as a single value. For example, the script gathers 10 Windows services; 5 of them are running and 5 have stopped. In my dashboard I want to show all the Windows services: if a service is running it should be green, and if it has stopped it should be red. Right now, everything shows the same color and the same value for both running and stopped services. Is there a way to apply this condition in the dashboard? Thank you in advance and looking forward to your response. Have a great day! Ann
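One pattern that may help, assuming the script output lands in a Splunk index and the dashboard supports per-value color formatting (the index, sourcetype, and field names below are assumptions): reduce each service to its latest state, map the state to a severity, and color on that field.

index=winservices sourcetype=powershell:services
| stats latest(State) as State by ServiceName
| eval range=case(State=="Running", "low", State=="Stopped", "severe", true(), "elevated")

With the severities in a field, the dashboard's "color by value" formatting (green for low, red for severe) can then be applied to that column rather than every row inheriting one color.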
Hi Team, we suddenly started noticing these errors in splunkd.log. Any idea what could cause them? We didn't make any changes to the splunkforwarder app.

06-01-2023 20:40:19.027 -0700 ERROR AwsSDK [4839 ExecProcessor] - CurlHttpClient Curl returned error code 28 - Timeout was reached
06-01-2023 20:40:19.027 -0700 ERROR AwsSDK [4839 ExecProcessor] - EC2MetadataClient Http request to retrieve credentials failed
06-01-2023 20:40:20.029 -0700 ERROR AwsSDK [4839 ExecProcessor] - CurlHttpClient Curl returned error code 28 - Timeout was reached
06-01-2023 20:40:20.029 -0700 ERROR AwsSDK [4839 ExecProcessor] - EC2MetadataClient Http request to retrieve credentials failed
06-01-2023 20:40:20.029 -0700 ERROR AwsSDK [4839 ExecProcessor] - EC2MetadataClient Can not retrive resource from http://169.254.169.254/latest/meta-data/placement/availability-zone
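Error code 28 is curl timing out while the AWS SDK inside the add-on polls the EC2 instance metadata service (IMDS) at 169.254.169.254. If the forwarder runs on EC2 and IMDSv2 is enforced with the default hop limit, anything that adds a network hop (containers, some proxies) can produce exactly this timeout. A hedged check/fix sketch, only applicable if the host really is an EC2 instance (the instance ID is a placeholder):

# From the forwarder host: can we reach IMDS at all?
curl -m 2 http://169.254.169.254/latest/meta-data/

# If an extra network hop sits in front of IMDS, raising the IMDSv2
# hop limit sometimes resolves the credential-fetch timeouts:
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-put-response-hop-limit 2 \
    --http-endpoint enabled

If the host is not on EC2 at all, the errors usually mean the add-on is configured to discover credentials via IMDS and should be pointed at explicit keys or a role instead.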
Any help is appreciated. OK, I installed Splunk in a Docker instance:

docker run -d --name Splunk --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock -p 8000:8000 -p 8089:8089 -p 9997:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=SUPER-SECRET" splunk/splunk:latest

Then I went to Settings > Forwarding and receiving > Receive data > Configure receiving and made sure "Listen on port 9997" was enabled, and added a new username and password. Then I went to an Ubuntu 22.04 box (I think) and ran the following (ChatGPT aided in some of this):

* sudo su
* useradd -m splunk
* groupadd splunk (which, if memory serves, said the group already existed)
* export SPLUNK_HOME="/opt/splunkforwarder"
* mkdir $SPLUNK_HOME
* Then I cd'd into the Splunk home directory
* chown -R splunk:splunk $SPLUNK_HOME
* wget -O splunkforwarder-9.0.5-<removed>-linux-2.6-amd64.deb "https://download.splunk.com/products/universalforwarder/releases/9.0.5/linux/splunkforwarder-9.0.5-<removed>-linux-2.6-amd64.deb" (not sure if those parts were account specific, so I removed them)
* dpkg -i /path/to/splunkforwarder_package_name.deb
* chown -R splunk:splunk /opt/splunkforwarder
* sudo -u splunk /opt/splunkforwarder/bin/splunk add forward-server My-IP-Address-To-Docker:9997 -auth New-Username:New-Password
* That made me agree to the license and enter the username and password I created for Splunk in Docker
* sudo -u splunk /opt/splunkforwarder/bin/splunk set deploy-poll IP-to-Docker:8089
* sudo -u splunk /opt/splunkforwarder/bin/splunk restart

Then I go to Settings > Add Data > Forward and I see "There are currently no forwarders configured as deployment clients to this instance." Also, if I go to Forwarder management I see "The forwarder management UI distributes deployment apps to Splunk clients. No clients or apps are currently available on this deployment server." What am I doing wrong?
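A few verification steps that may narrow this down (a sketch; the paths and hostname placeholders match the session above). The deployment client talks to the management port (8089), so that port must be reachable from the UF host, and the first phonehome can take a minute or two to show up in the UI:

# What the UF thinks its deployment server is:
sudo -u splunk /opt/splunkforwarder/bin/splunk show deploy-poll

# Effective deploymentclient.conf, with the file each setting comes from:
sudo -u splunk /opt/splunkforwarder/bin/splunk btool deploymentclient list --debug

# Forwarding target and its status:
sudo -u splunk /opt/splunkforwarder/bin/splunk list forward-server

# Can the UF host reach the container's management port at all?
curl -k https://My-IP-Address-To-Docker:8089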
I need to monitor all Windows servers and alert if a critical application gets uninstalled. The simplest query would be to search for Event ID 11724 and compare the application name in the "Message" field:

index=wineventlog EventCode="11724"
| search Message="*app_name*"

However, this generates lots of false positives, because application updates/upgrades automatically uninstall the application (Event ID 11724) and reinstall it (Event ID 11707) within about 5 minutes on average. My idea is to combine the two event IDs in a single query: search for the uninstallation event of an application and, if no installation event (11707) is found within 5 minutes, return true for alerting. But I did a quick study of subsearch and join and have no idea how to create this query. Anyone got a better idea?
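A join-free way to express this is to pull both event codes in one search and compare their timestamps per host with stats plus eval. A sketch; it assumes one uninstall per host per search window, so narrow the alert's time range accordingly:

index=wineventlog (EventCode=11724 OR EventCode=11707) Message="*app_name*"
| stats min(eval(if(EventCode==11724, _time, null()))) as uninstall_time
        max(eval(if(EventCode==11707, _time, null()))) as reinstall_time
        by host
| where isnotnull(uninstall_time) AND (isnull(reinstall_time) OR reinstall_time - uninstall_time > 300)

Here 300 seconds is the 5-minute grace period: a host only survives the where clause if its uninstall was never followed by a reinstall within that window.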
Hi team, I'm creating a query to check whether a machine changed its password (Password_Last_Set) more than once in a period of 30 days. I'm not getting it to work; can you help me? For example, with earliest=-90d the query below brings back the 3 changes, but if I add | search count > 1 it doesn't aggregate the information and doesn't return the statistics.

Query (doesn't work):

index="main" source="wineventlog:security" EventCode=4742 user="TRBK8SPRD06$"
| stats count earliest(_time) as firstTime latest(_time) as lastTime values(user) by Password_Last_Set, user, signature
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(firstTime)
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(lastTime)
| search count > 1

Works:

index="main" source="wineventlog:security" EventCode=4742 user="TRBK8SPRD06$"
| stats count earliest(_time) as firstTime latest(_time) as lastTime values(user) by Password_Last_Set, user, signature
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(firstTime)
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(lastTime)

Thanks in advance.
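The reason | search count > 1 returns nothing is that Password_Last_Set is in the by clause: each password change gets its own row, so every count is 1. Grouping by user only should give the intended result (a sketch of the adjusted query, same fields as above):

index="main" source="wineventlog:security" EventCode=4742 user="TRBK8SPRD06$" earliest=-30d
| stats count earliest(_time) as firstTime latest(_time) as lastTime values(Password_Last_Set) as Password_Last_Set values(signature) as signature by user
| where count > 1
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(firstTime) ctime(lastTime)

Now count is the number of changes per machine account in the 30-day window, and the where clause keeps only accounts with more than one.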
I have a table with columns "from" and "to", in which each row represents an edge between "from" and "to" nodes within a hierarchical tree. In this tree, any node can have any number of children, and arbitrary depth. The table rows are in no specific order. In my case, the tree contains ~50 nodes at most. We define P(X) as the path between the root node and node X, where X doesn't have to be a leaf node.

Question: Using SPL, how can I determine P(X) from the tabular data? The result would preferably be represented in the same tabular format, but containing only the edges on P(X). Any other representation is also fine, such as a string, mv field, table containing the nodes on the path, etc.

Example: This tree with 12 edges...

A
+--B
|  +--E
|  +--F
|  |  +--K
|  |  +--L
|  +--G
+--C
|  +--H
+--D
   +--I
   +--J
      +--M

...would be represented by the table:

from to
A    B
A    C
A    D
B    E
B    F
B    G
F    K
F    L
C    H
D    I
D    J
J    M

(Note that the rows can be in any order.) If we are looking for P(F), the resulting path would be "A-B-F", i.e. the following subset of the above table:

from to
B    F
A    B

(Again, the order of the rows doesn't matter.) I would normally consider recursive tree searches (DFS, BFS) or at least loops, but these are not SPL-like approaches.
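SPL has no recursion, but with ~50 nodes the upward walk can be unrolled into a fixed number of self-joins: start from the target node and repeatedly look up each node's parent. A sketch, assuming the edges live in a lookup called edges.csv (an assumption) and a maximum depth of 4; add one join per extra level needed:

| inputlookup edges.csv
| search to="F"
| rename to as n0, from as n1
| join type=left max=1 n1 [| inputlookup edges.csv | rename to as n1, from as n2]
| join type=left max=1 n2 [| inputlookup edges.csv | rename to as n2, from as n3]
| join type=left max=1 n3 [| inputlookup edges.csv | rename to as n3, from as n4]
| eval path=mvjoin(mvappend(n4, n3, n2, n1, n0), "-")
| table path

For the example tree this should yield path="A-B-F" (mvappend skips null values, so unused depth levels drop out). To get the edge-table form instead, filter the original table to rows whose "to" value is one of the nodes on the path.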
Hi guys, how are you doing? I'm reading this link, Solved: How to use replace in search? - Splunk Community, but I can't get the results I want. From a search I get a field called "user_name" with the format "DOMAIN\\\\USER"; what I want to do is replace the \\\\ with a single \ and get "DOMAIN\USER". If I use the query from the linked post I get an error, and if I add one more " I get a different error. How can I replace \\\\ with \? Regards, Martín.
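The pain here is double escaping: SPL string literals consume one layer of backslashes and the regex engine consumes another. A sed-mode rex that collapses any run of backslashes into a single one sidesteps having to know whether the stored value really holds two or four (a sketch; adjust the field name if yours differs):

| rex field=user_name mode=sed "s/\\\\+/\\\\/g"

After SPL string unescaping, this reaches the regex engine as s/\\+/\\/g, i.e. "one or more literal backslashes becomes one backslash", turning DOMAIN\\\\USER (or DOMAIN\\USER) into DOMAIN\USER.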
There are numerous questions/answers about extracting nested JSON data, but none of those answers seem to apply to what I am attempting to do. Given the following JSON data as indexed by Splunk:

{
  "disks": {
    "nvme0n1": {
      "model": "PC401 NVMe SK hynix 512GB",
      "serial": "123",
      "size": "476.94 GiB",
      "size_bytes": 512110190592,
      "type": "ssd"
    },
    "sda": {
      "model": "SK hynix SC401 S",
      "serial": "456",
      "size": "953.87 GiB",
      "size_bytes": 1024209543168,
      "type": "ssd",
      "vendor": "ATA"
    },
    "sdb": {
      "model": "SD/MMC CRW",
      "serial": "789",
      "size": "0 bytes",
      "size_bytes": 0,
      "type": "hdd",
      "vendor": "Generic-"
    }
  }
}

I want to produce a table like this:

host                disk     model                      serial  size        type
--------------------------------------------------------------------------------
myhost.example.org  nvme0n1  PC401 NVMe SK hynix 512GB  123     476.94 GiB  ssd
myhost.example.org  sda      SK hynix SC401 S           456     953.87 GiB  ssd
myhost.example.org  sdb      SD/MMC CRW                 789     0 bytes     hdd

I can go after an individual disk, like so:

search …
| dedup host
| spath output=disk "disks.sda"
| mvexpand disk
| spath input=disk
| table host model serial size type

…but how to perform this step for each disk in the disks object eludes me. Does anyone have any solutions?

A related question: where is SPL documented to such a degree that one could reasonably understand how to perform this type of extraction? Splunk documents the individual commands, but doesn't really explain how to tie them together to create more complex actions, and the Exploring Splunk: Search Processing Language (SPL) Primer and Cookbook doesn't even come close to explaining how to perform a complex action like this. Are there other tutorials/primers?
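On recent Splunk versions (the json_* eval functions arrived around 8.1/8.2, an assumption to verify against your version), the disks object can be iterated generically: grab its keys, expand them, and extract each sub-object:

| spath output=disks path=disks
| eval disk=json_array_to_mv(json_keys(disks))
| mvexpand disk
| eval diskinfo=json_extract(disks, disk)
| spath input=diskinfo
| table host disk model serial size type

json_keys returns ["nvme0n1","sda","sdb"] as a JSON array, json_array_to_mv turns that into a multivalue field for mvexpand, and json_extract pulls out each disk's sub-object for a second spath pass, so new disk names need no query changes.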
I am new to using Splunk and having some difficulty with the search query logic. I want to create a dashboard that displays the results when one condition is met, but only if another condition is also true. Example: if PropertyOne=true and PropertyTwo=5, return the instances where both of these conditions are met. I have tried using the if, match, and case functions, but I do not think I am using them correctly. Search formats I've tried:

eval err=if("PropertyOne"=true, "PropertyTwo"=5)
if("PropertyOne"=false AND "PropertyTwo"=5)
eval err=if(match("PropertyOne"=false AND "PropertyTwo"=5), 1, 0) <-- here I added 1 and 0 because I didn't know what else to put in the other two slots the "if" function needs
eval err=case("PropertyOne"=true AND "PropertyTwo"=5)
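The quoting is likely the problem: "PropertyOne"=true compares the literal string PropertyOne, not the field. With unquoted field names, either of these forms should work (a sketch; whether your values are booleans, strings, or numbers is an assumption, so adjust the literals):

index=your_index
| where PropertyOne="true" AND PropertyTwo=5

or, to keep a flag field for the dashboard:

| eval err=if(PropertyOne="true" AND PropertyTwo=5, 1, 0)
| where err=1

In where/eval, quote string values rather than field names; a field name needs single quotes only when it contains spaces or special characters.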
Our requirements are to have readily searchable data for 12 months and a 'cold store' of data for an additional 18 months (30 months total). Ingest Actions seems like the obvious choice, since it can write to an S3 bucket and compress the data in a format easily re-ingested or passed to a third party if needed. However, Ingest Actions only seems to work if you apply the ruleset to a sourcetype. Given that there may be a hundred or more sourcetypes, this is a little onerous. Is there a method to accomplish this without creating a ruleset for every sourcetype?
I'm trying to set up the Splunk OTel forwarder (https://github.com/signalfx/splunk-otel-collector) on an AKS cluster to forward all pod logs to Splunk with a HEC token. I'm deploying the forwarder using Helm and have trimmed down the sample values.yaml file to just log forwarding; I don't want/need any metrics forwarded, nor do I use any SignalFx component. The fluentd daemonset is getting deployed and no errors are getting logged, but I'm not seeing any logs on the Splunk side. Is there a sample values.yaml file that can be referenced? I've seen the sample on GitHub, but it seems a little too simplified. I know the cluster has connectivity to Splunk, and a prior tool was able to forward logs successfully. Any advantages/disadvantages to using fluentd or OTel? Thanks!
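For the signalfx/splunk-otel-collector-chart, a log-only configuration can be quite small. A hedged sketch (endpoint, token, index, and cluster name are placeholders; insecureSkipVerify belongs only with a self-signed HEC certificate):

clusterName: my-aks-cluster
logsEngine: otel          # native OTel filelog pipeline instead of fluentd
splunkPlatform:
  endpoint: "https://splunk.example.com:8088/services/collector"
  token: "00000000-0000-0000-0000-000000000000"
  index: "k8s_logs"
  insecureSkipVerify: true
agent:
  enabled: true

On fluentd vs. OTel: the chart has been moving toward the native OTel logs engine, so logsEngine: otel avoids the separate fluentd dependency. If fluentd pods run clean but nothing arrives, also check that the HEC token's allowed indexes include the target index and that the endpoint includes the /services/collector path.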
Hi! From the documentation (https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Whitelistorblacklistspecificincomingdata), the whitelist and blacklist options only work on the filenames of logs. Is there an option to filter on data within the log file? E.g., from this extract of /var/log/messages:

May 28 18:00:01 xxxxxxxxxx kernel: type=1110 audit(1685311201.838:180500): pid=19649 uid=0 auid=0 ses=24140 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_localuser,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
May 28 18:00:01 xxxxxxxxxx CROND[19681]: (root) CMD (/usr/lib64/sa/sa1 1 1)
May 28 18:00:01 xxxxxxxxxx kernel: type=1104 audit(1685311201.905:180501): pid=19649 uid=0 auid=0 ses=24140 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_localuser,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
May 28 18:00:01 xxxxxxxxxx kernel: type=1106 audit(1685311201.941:180502): pid=19649 uid=0 auid=0 ses=24140 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
May 28 18:00:01 svr-spl-mat-01 systemd: Removed slice User Slice of root.
May 28 18:00:02 svr-spl-mat-01 snmpd[1359]: Connection from UDP: [ xxxxxxxxxx]:50765->[ xxxxxxxxxx]:161
May 28 18:00:02 svr-spl-mat-01 snmpd[1359]: Connection from UDP: [ xxxxxxxxxx]:50765->[ xxxxxxxxxx]:161
May 28 18:00:02 svr-spl-mat-01 snmpd[1359]: Connection from UDP: [10.138.211.15]:50765->[ xxxxxxxxxx]:161

I would like to blacklist all the snmpd events. This file is just an example; the real file is from an application and contains sensitive data that I don't want to get into Splunk. Regards.
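Filename white/blacklists won't help here, but matching events can be routed to the nullQueue at parse time. A sketch (the sourcetype name is a placeholder; this belongs on the indexer or heavy forwarder that parses the data, not on a universal forwarder):

# props.conf
[your_syslog_sourcetype]
TRANSFORMS-drop_snmpd = drop_snmpd_events

# transforms.conf
[drop_snmpd_events]
REGEX = snmpd\[\d+\]:
DEST_KEY = queue
FORMAT = nullQueue

Events whose raw text matches the REGEX are discarded before indexing, so they never count against license or land in an index.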
I'm running a search with the following logic: I have a bunch of events and I want the bin command with span=60m to be relative to the time range I'm searching (from 9:27 PM to 10:27 PM), instead of snapping to the hour. Similar to how earliest=-1h@h snaps to the hour and earliest=-60m@m snaps to the minute, is there a way I can have the bin command snap to the minute and not the hour?
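The bin command has an aligntime option for exactly this: it offsets the buckets instead of snapping them to the span boundary. A sketch with the span described above:

... your search ...
| bin _time span=60m aligntime=earliest

aligntime=earliest aligns the 60-minute buckets to the earliest time of the search (9:27 PM in the example) rather than to the top of the hour; it also accepts a time-specifier if a fixed alignment point is wanted.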
Hi there, I have spent 3 days looking for an answer with no luck; I'm hoping that someone here can help. I want to create one panel with one chart. I want the chart to have multiple lines that are created by different queries, and a checkbox for each query/line, so that when you check a box, its corresponding query runs and the resulting line appears. For example, let's say I have the following queries:

1) search message="abcd" | timechart count AS abcd
2) search message="efgh" | timechart count AS efgh
3) search message="ijkl" | timechart count AS ijkl

And so on. I want checkboxes A, E, I, and so forth. When none of the boxes are checked, I don't want to see any lines on the chart. When I check A, I want the abcd data to appear on the chart. When I check E, I want the efgh line to appear on top of the abcd line. When I check I, I want the ijkl line to appear on top of the other two. If I uncheck E, I want the efgh line to disappear but the other two to remain. You get the idea. I want to add as many queries as I want, have a checkbox for each query, and show the result line of that query on top of the other lines when I click its checkbox. Is this possible? I'd appreciate any help with this. Many thanks, Skye
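If all the lines can come from one base search, a checkbox token with a value prefix/suffix can feed an IN() filter, and timechart ... by draws only the checked series. A Simple XML sketch (the index and field names are placeholders; truly independent queries would instead need token-gated appends, which is clunkier):

<form>
  <fieldset>
    <input type="checkbox" token="series_tok">
      <label>Series</label>
      <choice value="abcd">A</choice>
      <choice value="efgh">E</choice>
      <choice value="ijkl">I</choice>
      <delimiter>,</delimiter>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=your_index message IN ($series_tok$) | timechart count by message</query>
        </search>
        <option name="charting.chart">line</option>
      </chart>
    </panel>
  </row>
</form>

Each checked box contributes its quoted value to the comma-separated token, so checking A and E runs message IN ("abcd","efgh") and the chart shows just those two lines; with nothing checked the token is unset and the search doesn't run, so no lines appear.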
Currently, I can download a report for overall incoming plus outgoing calls, with total number of minutes and average call duration, but I would like to download separate reports for incoming and outgoing calls with their respective total number of minutes and average call duration. Could you please suggest how to extract this report?
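Without knowing the data source it's hard to be specific, but if the call records expose a direction field, splitting the existing report is usually one stats clause (a sketch; index, direction, and duration_min are all assumed field names):

index=call_records
| stats count as total_calls sum(duration_min) as total_minutes avg(duration_min) as avg_call_duration by direction

Grouping by direction yields one row for incoming and one for outgoing, each with its own totals; filtering with direction="incoming" (or "outgoing") before export gives fully separate reports.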
Hi, I am trying to install Splunk SOAR 6.0.1 for Linux. I've followed the prerequisites here: https://docs.splunk.com/Documentation/SOARonprem/6.0.1/Install/InstallUnprivileged and built a VM running CentOS 7.9. I've run the prepare script as above too, and everything came back fine (I'm not running in FIPS mode; this is for a home lab). I then run the install script with --ignore-warnings because it keeps shouting about the need for a 500GB disk; the disk attached to the VM is 500GB, but it is thin provisioned in VMware ESXi v8.0.0. The install goes OK and then I get the below error message when it tries to start Splunk SOAR.

[splunksoar-adm@NEST-Splunk-SOAR-01 splunk-soar]$ sudo ./soar-install --splunk-soar-home /opt/splunk-soar --https-port 8443 --ignore-warnings
[sudo] password for splunksoar-adm:
Detailed logs will be located at /opt/splunk-soar/var/log/phantom/phantom_install_log
Starting install of Splunk SOAR 6.0.1.123902
Skipping pre-deploy phase; continuing from StartPhantom
================================================================================
You are about to install Splunk SOAR version 6.0.1.123902.
 - Installation path: /opt/splunk-soar
 - HTTPS port: 8443
Do you wish to proceed? (y/N): y
================================================================================
INSTALL: StartPhantom
Starting Splunk SOAR
Failed to start Splunk SOAR
Traceback (most recent call last):
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py", line 207, in run
    proc = subprocess.run(normalized_cmd, **cmd_args)  # noqa: PHANTOM112
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/usr/python39/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/opt/splunk-soar/bin/start_phantom.sh']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/./soar-install", line 72, in main
    deployment.run()
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/deployments/deployment.py", line 132, in run
    self.run_deploy()
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/usr/python39/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/deployments/deployment.py", line 193, in run_deploy
    operation.run()
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/operations/deployment_operation.py", line 135, in run
    self.install()
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/operations/tasks/start_phantom.py", line 18, in install
    self.shell.start_phantom()
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py", line 302, in start_phantom
    self.run(
  File "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py", line 224, in run
    raise InstallError(
install.install_common.InstallError: Failed to start Splunk SOAR
install failed.

Below are all the messages from the log file at the time of running the command.
{"component": "installation_log", "time": "2023-06-01T20:00:46.952514", "logger": "install", "pid": 536, "level": "INFO", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/install_log/logger.py", "line": 52, "message": "Detailed logs will be located at /opt/splunk-soar/var/log/phantom/phantom_install_log", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257"} {"component": "installation_log", "time": "2023-06-01T20:00:49.494291", "logger": "install.deployments.deployment", "pid": 536, "level": "INFO", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/deployments/deployment.py", "line": 101, "message": "Starting install of Splunk SOAR 6.0.1.123902", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "time_elapsed_since_start": 0.000421} {"component": "installation_log", "time": "2023-06-01T20:00:49.494734", "logger": "install.deployments.deployment", "pid": 536, "level": "INFO", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/deployments/deployment.py", "line": 128, "message": "Skipping pre-deploy phase; continuing from StartPhantom", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "time_elapsed_since_start": 0.000697} {"component": "installation_log", "time": "2023-06-01T20:00:49.503321", "logger": "install.deployments.deployment", "pid": 536, "level": "INFO", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/deployments/deployment.py", "line": 91, "message": "\n\n================================================================================\nYou are about to install Splunk SOAR version 6.0.1.123902.\n - Installation path: /opt/splunk-soar\n - HTTPS port: 8443\n", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "phase": "deploy", "time_elapsed_since_start": 0.009425} {"component": "installation_log", "time": "2023-06-01T20:00:52.228354", "logger": "install.operations.deployment_operation", "pid": 536, "level": "DEBUG", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/operations/deployment_operation.py", "line": 123, "message": "Starting install task operation", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "phase": "deploy", "operation_start_time": "2023-06-01T20:00:52.228275", "operation_name": "StartPhantom", "operation_status": "started", "operation_type": "task", "operation_cluster_phase": "ClusterPhase.NONE", "time_elapsed_since_start": 2.734319, "time_elapsed_since_operation_start": 0.000164} {"component": "installation_log", "time": "2023-06-01T20:00:52.228635", "logger": "install.console", "pid": 536, "level": "INFO", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py", "line": 301, "message": "Starting Splunk SOAR", "install_run_uuid": 
"7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "phase": "deploy", "operation_start_time": "2023-06-01T20:00:52.228275", "operation_name": "StartPhantom", "operation_status": "started", "operation_type": "task", "operation_cluster_phase": "ClusterPhase.NONE", "time_elapsed_since_start": 2.734676, "time_elapsed_since_operation_start": 0.000518} {"component": "installation_log", "time": "2023-06-01T20:00:52.229350", "logger": "install.console", "pid": 536, "level": "DEBUG", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py", "line": 204, "message": "Running subprocess", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "phase": "deploy", "operation_start_time": "2023-06-01T20:00:52.228275", "operation_name": "StartPhantom", "operation_status": "started", "operation_type": "task", "operation_cluster_phase": "ClusterPhase.NONE", "log_type": "subprocess", "command": "/opt/splunk-soar/bin/start_phantom.sh", "environment_variables": {"PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "HOME": "/root"}, "time_elapsed_since_start": 2.735282, "time_elapsed_since_operation_start": 0.001123} {"component": "installation_log", "time": "2023-06-01T20:00:52.252023", "logger": "install.console", "pid": 536, "level": "DEBUG", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py", "line": 250, "message": "Subprocess completed.", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "phase": "deploy", "operation_start_time": "2023-06-01T20:00:52.228275", "operation_name": "StartPhantom", "operation_status": "started", "operation_type": "task", "operation_cluster_phase": "ClusterPhase.NONE", "log_type": "subprocess", "command": "/opt/splunk-soar/bin/start_phantom.sh", "environment_variables": {"PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "HOME": "/root"}, "status": "failed", "exit_code": 1, "stdout": ["Error: cannot run as a superuser"], "stderr": [], "time_elapsed_since_start": 2.758061, "time_elapsed_since_operation_start": 0.023908} {"component": "installation_log", "time": "2023-06-01T20:00:52.252605", "logger": "install.operations.deployment_operation", "pid": 536, "level": "DEBUG", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/operations/deployment_operation.py", "line": 142, "message": "Completed install task operation", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "phase": "deploy", "operation_start_time": "2023-06-01T20:00:52.228275", "operation_name": "StartPhantom", "operation_status": "failed", "operation_type": "task", "operation_cluster_phase": "ClusterPhase.NONE", "time_elapsed_since_start": 2.758546, "time_elapsed_since_operation_start": 0.024388} {"component": "installation_log", "time": "2023-06-01T20:00:52.253022", 
"logger": "install", "pid": 536, "level": "DEBUG", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/meta.py", "line": 224, "message": "Adding deployment state to metadata", "continue_from": "StartPhantom", "cluster_phase": "ClusterPhase.NONE", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "time_elapsed_since_start": 2.758997} {"component": "installation_log", "time": "2023-06-01T20:00:52.254129", "logger": "install", "pid": 536, "level": "ERROR", "file": "/home/splunksoar-adm/Splunk-SOAR/splunk-soar/./soar-install", "line": 95, "message": "Failed to start Splunk SOAR", "install_run_uuid": "7af5f4ed-9863-488f-a0c1-fe2818588257", "start_time": "2023-06-01T20:00:49.494112", "install_mode": "install", "installed_version": "6.0.1.123902", "proposed_version": "6.0.1.123902", "deployment_type": "unpriv", "continue_from": "StartPhantom", "time_elapsed_since_start": 2.762006, "pretty_exc_info": ["Traceback (most recent call last):", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py\", line 207, in run", " proc = subprocess.run(normalized_cmd, **cmd_args) # noqa: PHANTOM112", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/usr/python39/lib/python3.9/subprocess.py\", line 528, in run", " raise CalledProcessError(retcode, process.args,", "subprocess.CalledProcessError: Command '['/opt/splunk-soar/bin/start_phantom.sh']' returned non-zero exit status 1.", "", "During handling of the above exception, another exception occurred:", "", "Traceback (most recent call last):", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/./soar-install\", line 72, in main", " deployment.run()", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/deployments/deployment.py\", line 132, in run", " self.run_deploy()", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/usr/python39/lib/python3.9/contextlib.py\", line 79, in inner", " return func(*args, **kwds)", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/deployments/deployment.py\", line 193, in run_deploy", " operation.run()", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/operations/deployment_operation.py\", line 135, in run", " self.install()", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/operations/tasks/start_phantom.py\", line 18, in install", " self.shell.start_phantom()", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py\", line 302, in start_phantom", " self.run(", " File \"/home/splunksoar-adm/Splunk-SOAR/splunk-soar/install/console.py\", line 224, in run", " raise InstallError(", "install.install_common.InstallError: Failed to start Splunk SOAR"]} No idea what's causing it to fail and can't find anything online. Let me know if you need more info, any help will be appreciated. Cheers Rob
So I am trying to compare bar graphs of event count per index for two separate days. We are upgrading our environment, and I want this query to show us the event count before and after we upgrade. I have tried using earliest=-<int>d and latest=-<int>d, but the query keeps using the time picker. I am using dbinspect, so I wasn't sure if that had something to do with it. Below is the working query that outputs the same results for both EventCount and EventCount_1:

| dbinspect index=*
| search index!=_*
| fields bucketId eventCount index _time
| stats sum(eventCount) as EventCount values(max(_time)) as Time by index
| table index EventCount
| join type=outer index
    [| dbinspect index=*
     | search index!=_*
     | fields bucketId eventCount index
     | stats sum(eventCount) as EventCount_1 by index
     | table index EventCount_1]
| table index EventCount EventCount_1

I have tried putting the time periods in a few places. After the first index, the query runs but returns the same results, using the time from the time picker. If I place it after the search, I don't get any results.

| dbinspect index=* earliest=-4d latest=-3d
| search index!=_*
| fields bucketId eventCount index _time
| stats sum(eventCount) as EventCount values(max(_time)) as Time by index
| table index EventCount
| join type=outer index
    [| dbinspect index=*
     | search index!=_* earliest=2023-05-30T00:00:00 latest=2023-06-01T23:59:59
     | fields bucketId eventCount index
     | stats sum(eventCount) as EventCount_1 by index
     | table index EventCount_1]
| table index EventCount EventCount_1

This second one is also a working query, but it still uses the time from the time picker instead of the one stated in the query. Am I supposed to be using a different type of time selection with dbinspect? If I don't use dbinspect, I don't get the same results. Is there any other way to get these results? I'm just trying to get event count by index. Thank you for any help.
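If the goal is just event counts per index over two specific windows, tstats honors earliest/latest inline, unlike dbinspect, whose time scope comes from the time picker and bucket metadata. A sketch with relative windows (swap in exact timestamps as needed):

| tstats count as EventCount where index=* earliest=-4d@d latest=-3d@d by index
| join type=outer index
    [| tstats count as EventCount_1 where index=* earliest=-1d@d latest=@d by index]
| table index EventCount EventCount_1

Note also that dbinspect counts events per bucket, and buckets can span both target days, which is another reason the per-day numbers come out identical; tstats counts the events themselves.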