All Topics


Hi all, I have a single value visualisation added to a dashboard. Its background colour depends on the value shown (green for 'Pass' and red for 'Fail'). But somehow it always shows a red background even though the value is 'Pass'. Here is the code I use:
```
<panel depends="$hide_css$">
  <html>
    <style>
      #verdict rect { fill: $verdict_background$ !important; }
      #verdict text { fill: $verdict_foreground$ !important; }
    </style>
  </html>
</panel>
<panel>
  <single id="verdict">
    <search>
      <query>index=temp_index
| search splunk_id=$splunk_id$
| eval ver = verdict.$campaigns_included$
| table verdict.$campaigns_included$</query>
      <done>
        <eval token="verdict_background">if($result.ver$=="Pass", "green", "red")</eval>
        <set token="verdict_foreground">black</set>
      </done>
    </search>
    <option name="colorMode">block</option>
    <option name="drilldown">none</option>
    <option name="height">60</option>
    <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
    <option name="rangeValues">[0]</option>
    <option name="useColors">1</option>
  </single>
</panel>
```
$campaigns_included$ is the value chosen on a dropdown. Please help, any help would be appreciated. @bowesmana requesting your expertise here!
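A minimal sketch of one likely cause, assuming the problem is token resolution rather than CSS: `$result.ver$` in the `<done>` handler only resolves when a field named `ver` exists in the first row of the final results, and in `eval ver = verdict.$campaigns_included$` the unquoted dot is SPL's concatenation operator, not part of a field name. Single-quoting the dotted field and tabling `ver` itself would address both:
```
<query>index=temp_index splunk_id=$splunk_id$
| eval ver = 'verdict.$campaigns_included$'
| table ver</query>
<done>
  <eval token="verdict_background">if($result.ver$=="Pass", "green", "red")</eval>
  <set token="verdict_foreground">black</set>
</done>
```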
Hello, I have been building a dashboard in Dashboard Studio and was looking for some help with implementing the XML dashboard fields option in Dashboard Studio. If my panel in an XML dashboard has, say, 5 fields and I want to display only 3 in the table, I can use `<fields>[field1,field2,field3]</fields>`; when I download the panel results, I can still view all 5 fields. I am looking to do the same thing in Dashboard Studio and am not able to get that functionality. I have tried the header option in the table options and didn't get any success.
```
{
  "type": "splunk.table",
  "options": {
    "tableFormat": {
      "headerBackgroundColor": "#0E6162",
      "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
      "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",
      "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
    },
    "backgroundColor": "#ffffff",
    "table": "> [\"field1\",\"field2\"]",                      -> didn't work
    "headers": "> table | getField([\"field1\",\"field2\"])",  -> didn't work
    "showInternalFields": false
  },
  "dataSources": {
    "primary": "ds_TlbXBz5i"
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
```
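A sketch of one workaround, assuming a chained data source fits the layout (the data source IDs and field names below are placeholders): keep the full 5-field query in a base search, then drive the table from a chain that narrows the fields, so the base results still carry everything:
```
"dataSources": {
  "ds_base": {
    "type": "ds.search",
    "options": { "query": "index=main | table field1 field2 field3 field4 field5" }
  },
  "ds_display": {
    "type": "ds.chain",
    "options": { "extend": "ds_base", "query": "| fields field1, field2, field3" }
  }
}
```
Point the table's `primary` data source at `ds_display`. Note that downloading from the table will then export only the three displayed fields, which differs from the XML `<fields>` behaviour.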
Hi all, I have a table with these fields:

| Time/Date | Key | Robot | Process | Host | Status | Environment | Type |
|---|---|---|---|---|---|---|---|
| 01.01.2022 12:30:00 | Key 1, Key 2, Key 3 | Robot 1, Robot 2, Robot 3 | Process Claim, Process Claim, Process Claim | Host W, Host X, Host Y | Success, Success, Success | Production, Production, Production | Critical, Critical, Critical |
| 01.01.2022 12:30:00 | Key 4 | Robot 4 | Process Refund | Host Z | Success | Production | Critical |
| 02.02.2022 11:30:00 | Key 5 | Robot 5 | Process Tax | Host V | Failed | Non-Production | Minor |

I want to split the first row into three rows to show the data. Is there a way I can split these based on the Time/Date and the Process? Ideally, I want to generate a new table like this:

| Time/Date | Key | Robot | Process | Host | Status | Environment | Type |
|---|---|---|---|---|---|---|---|
| 01.01.2022 12:30:00 | Key 1 | Robot 1 | Process Claim | Host W | Success | Production | Critical |
| 01.01.2022 12:30:00 | Key 2 | Robot 2 | Process Claim | Host X | Success | Production | Critical |
| 01.01.2022 12:30:00 | Key 3 | Robot 3 | Process Claim | Host Y | Success | Production | Critical |
| 01.01.2022 12:30:00 | Key 4 | Robot 4 | Process Refund | Host Z | Success | Production | Critical |
| 02.02.2022 11:30:00 | Key 5 | Robot 5 | Process Tax | Host V | Failed | Non-Production | Minor |
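A sketch of the usual SPL idiom for expanding parallel multivalue fields, assuming Key, Robot, Process, Host, etc. line up positionally within each row (extend the mvzip chain the same way for the remaining columns):
```
| eval zipped=mvzip(mvzip(mvzip(Key, Robot, "|"), Process, "|"), Host, "|")
| mvexpand zipped
| eval parts=split(zipped, "|")
| eval Key=mvindex(parts, 0), Robot=mvindex(parts, 1), Process=mvindex(parts, 2), Host=mvindex(parts, 3)
| fields - zipped parts
```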
Hi, I have a lot of event data, where every instance can be identified by a unique ID. Every instance contains several activities, and some activities occur more than once. For some this is okay, but for others I would like to add e.g. a "_2" at the end of the activity name for the second occurrence of that activity. As this should be performed only for the second occurrence within the instance, and only for some activities, I was not sure whether it is possible to transform the data with SPL the way I need. Thanks for your support!
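A sketch of one way to do this with streamstats, using hypothetical field names `id` and `activity` and a hypothetical allow-list of activities that should be renamed:
```
| sort 0 id _time
| streamstats count as occurrence by id activity
| eval activity=if(occurrence=2 AND activity IN ("ActivityA", "ActivityB"),
                   activity . "_2",
                   activity)
```
streamstats counts in search-result order, so the leading sort makes "second" mean chronologically second within each instance.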
Hi, I have multiple syslog collectors (practically heavy forwarders that pick up logs from disk). I am struggling to find a way to set a specific sourcetype for parts of the logs that are picked up from disk. /data/syslog/ contains thousands of folders named after IP addresses, and I want to set a specific sourcetype for, let's say, 100 of them. I've tried using regex and whitelist, but it seems two stanzas with the same name won't work:
```
[monitor:///data/syslog/tcp/.../*.log]
sourcetype = rsyslog
host_segment = 4
index = xxx_syslog
blacklist = .*\.gz$

[monitor:///data/syslog/tcp/.../*.log]
sourcetype = vmw-syslog
host_segment = 4
index = xxx_syslog
blacklist = .*\.gz$
whitelist = \/data\/syslog\/tcp\/(10\.21[1289]\.75\.\d+|10\.143\.15\.\d+|10\.21[01]\.70\.\d+|10\.250\.191\.50|10\.30\.221\.19[1-2]|11\.36\.1[128]\.\d+|10\.37\.12\.\d+|10\.45\.[12]\.\d+|10\.6[23]\.12.\d+|10\.63\.10\.20|10\.67\.(0|64)\.\d+|10\.67\.67\.67)\/
```
Any idea how I can set a sourcetype using a regex? (I cannot rewrite the sourcetype on a heavy forwarder, because this data should be parsed and get a new sourcetype from a TA app (VMware ESXi logs), and I can't parse data twice.)
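A sketch of one input-time approach, assuming your folder names map cleanly onto wildcard patterns: monitor stanza names must be unique, but you can add a second stanza with a more specific wildcarded path for the hosts that need the other sourcetype; when multiple monitor stanzas match a file, the settings from the most specific path are meant to apply. Worth validating on a test folder first, since overlapping monitors have version-specific quirks. The IP pattern below is a placeholder:
```
# inputs.conf
[monitor:///data/syslog/tcp/.../*.log]
sourcetype = rsyslog
host_segment = 4
index = xxx_syslog
blacklist = .*\.gz$

# More specific path: files under these folders get the VMware sourcetype
[monitor:///data/syslog/tcp/10.143.15.*/*.log]
sourcetype = vmw-syslog
host_segment = 4
index = xxx_syslog
blacklist = .*\.gz$
```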
Hello, I have logs containing two fields, "account" and "shard". By doing `| table account shard` I created a two-column table, which can have repeating values like:

| account | shard |
|---|---|
| 100 | 21 |
| 100 | 21 |
| 100 | 8 |
| 101 | 10 |

I then did `| stats dc(shard) by account`, which gives me:

| account | dc(shard) |
|---|---|
| 100 | 2 |
| 101 | 1 |

I have two such tables (before and after) of "account" vs "dc(shard)" and I want to compare them, i.e. get the difference in the distinct number of shards for each account before and after, but I am struggling to do this. Please guide me to the result. (I can explain anything that's unclear.)
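A sketch of one comparison pattern, assuming the "before" and "after" populations can be distinguished by time range (the index name and time windows here are placeholders):
```
index=my_index earliest=-2d@d latest=-1d@d
| stats dc(shard) as before by account
| append
    [ search index=my_index earliest=-1d@d latest=@d
      | stats dc(shard) as after by account ]
| stats values(before) as before values(after) as after by account
| eval diff = after - before
```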
On 11th October we had 5 events, but we received only 2 email notifications. Below are the 5 events of the alert for yesterday (11th Oct):

1    2022-10-11 23:30:04 BST
2    2022-10-11 23:00:05 BST
3    2022-10-11 22:30:04 BST
4    2022-10-11 22:00:02 BST
5    2022-10-11 14:00:02 BST

We received email notifications only for the 1st and 5th events; there was no email notification for the 2nd, 3rd and 4th. Could you please help us with this discrepancy? It had client impact and caused many transaction failures: the events were generated for the issues, but the emails were not triggered. Can you help me resolve this issue? Thank you, Veeru
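A sketch of a first diagnostic step, assuming this is a scheduled alert (substitute the real saved search name): the scheduler log records each run's status and whether throttling suppressed the alert action, which is a common cause of "fired but no email" gaps in closely spaced runs:
```
index=_internal sourcetype=scheduler savedsearch_name="My Alert Name"
| table _time status result_count alert_actions suppressed
```
Pair this with `index=_internal source=*python.log* sendemail ERROR` to catch SMTP-level failures. Field names here reflect recent Splunk versions; adjust if yours differ.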
Hi, we are receiving logs from UF and syslog, and now we have a request to forward particular raw Windows events to another syslog server. Does anybody have experience with something like this?
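A sketch of the standard selective-routing technique, assuming the filtering happens on a heavy forwarder, where parsing occurs (the sourcetype, regex, and destination below are placeholders):
```
# props.conf
[WinEventLog:Security]
TRANSFORMS-route_to_syslog = send_to_syslog

# transforms.conf
[send_to_syslog]
REGEX = EventCode=4625
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

# outputs.conf
[syslog:my_syslog_group]
server = 192.0.2.10:514
```
Events matching the regex are routed to the syslog output group in addition to normal indexing; use `defaultGroup` in outputs.conf to control where everything else goes.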
I have the below events/messages in my search results. There are two fields, stack_trace and TYPE, as shown. I want to group the events and count them based on a particular text from stack_trace and the TYPE field. Is it possible to group the messages based on the two fields (TYPE, stack_trace)? I am using the query below, but I am stuck on how to group by both fields.

Event 1
```
{
  TYPE: ABCD
  stack_trace : com.abc.xyz.package.ExceptionName: Missing A.
    at random.package.w(DummyFile1:45)
    at random.package.x(DummyFile2:64)
    at random.package.y(DummyFile3:79)
}
```

Event 2
```
{
  TYPE: XYZ
  stack_trace : com.abc.xyz.package.ExceptionName: Missing B.
    at random.package.w(DummyFile1:45)
    at random.package.x(DummyFile2:64)
    at random.package.y(DummyFile3:79)
}
```

Expected output:

| TYPE | Exception | Count |
|---|---|---|
| ABCD | Missing A | 3 |
| ABCD | Missing B | 4 |
| XYZ | Missing A | 6 |
| XYZ | Missing B | 1 |

The query I am using, but it is incomplete:
```
BASE_SEARCH
| rex field=_raw "Exception: (?<Exception>[^\.\<]+)"
| stats count as Count by "Exception"
```

Actual output:

| Exception | Count |
|---|---|
| Missing A | 3 |
| Missing B | 4 |
| Missing c | 6 |
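A sketch of the by-clause fix, assuming TYPE is already extracted as a field: stats accepts multiple fields after `by`, so adding TYPE to the grouping produces the expected breakdown without any other changes:
```
BASE_SEARCH
| rex field=_raw "Exception: (?<Exception>[^\.\<]+)"
| stats count as Count by TYPE Exception
```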
Hello, I'm working on setting up an alert for when disk space usage goes above 80%. However, I don't know how to change, in the query, which partition is checked. The Splunk service is installed on the main partition, but I want to set the alert for another partition, the one that stores the logs. Here is the search for the alarm:
```
| rest splunk_server_group=dmc_group_* /services/server/status/partitions-space
| eval free = if(isnotnull(available), available, free)
| eval usage = capacity - free
| eval pct_usage = floor(usage / capacity * 100)
| where pct_usage > 30
| stats first(fs_type) as fs_type first(capacity) AS capacity first(usage) AS usage first(pct_usage) AS pct_usage by splunk_server, mount_point
| eval usage = round(usage / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| rename splunk_server AS Instance mount_point as "Mount Point", fs_type as "File System Type", usage as "Usage (GB)", capacity as "Capacity (GB)", pct_usage as "Usage (%)"
```
(Screenshot of the search results omitted.)

The partition we need to monitor is this one:
```
Filesystem               Size   Used  Avail  Use%  Mounted on
/dev/mapper/vg00-root    1014M  84M   931M   9%    /
/dev/mapper/vg00-usr     4.0G   1.8G  2.3G   45%   /usr
/dev/sda1                1014M  192M  823M   19%   /boot
/dev/mapper/vg00-opt     10G    6.3G  3.8G   63%   /opt
/dev/mapper/vg01-splunk  32G    15G   18G    47%   /var/log/splunk
```
How can I change the query so the search is done on the last partition? Regards!
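A sketch of the filter, assuming the REST endpoint reports that mount point on your instances: restrict `mount_point` early and raise the threshold to match the 80% requirement; the rest of the search stays as it is:
```
| rest splunk_server_group=dmc_group_* /services/server/status/partitions-space
| search mount_point="/var/log/splunk"
| eval free = if(isnotnull(available), available, free)
| eval usage = capacity - free
| eval pct_usage = floor(usage / capacity * 100)
| where pct_usage > 80
```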
Hi all, I am finding the average per day for some of my data. My search looks like this:
```
| bucket _time span=1d
| stats distinct_count(Task) as counts by _time Group
| stats avg(counts) as AverageCountPerDay by Group
```
I was able to get results, but the problem is that the stats avg does not consider days on which there is no "Task". I want my search to consider all days. How can I achieve this?
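A sketch of one fix, assuming the search's time range spans all the days you care about: timechart emits a bucket for every day in the range, including days with no events, whereas `stats ... by _time` simply drops them:
```
| timechart span=1d dc(Task) as counts by Group
| untable _time Group counts
| fillnull value=0 counts
| stats avg(counts) as AverageCountPerDay by Group
```
If empty buckets come through as null rather than 0 in your version, they may drop out at the untable step; verify against a known quiet day.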
Hello everyone, please could I get some help with this. I would like to see the percentage of IPs that log in from different countries, with the total percentage of all the IPs from one country grouped into one row. Not quite sure how to achieve this.

| IP | Country | Percentage |
|---|---|---|
| 2.3.1.2 | BR | 22% |
| 2.4.3.1 | BR | 27% |
| | | Total=49% |
| 1.2.3.4 | CA | 11% |
| 1.1.3.2 | CA | 10% |
| 1.2.3.2 | CA | 8% |
| | | Total=29% |
| 6.5.3.2 | IN | 5% |
| 6.4.2.1 | IN | 7% |
| 6.2.3.1 | IN | 8% |
| 5.7.9.8 | IN | 2% |
| | | Total=22% |
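A sketch using eventstats for the overall total and appendpipe for the per-country subtotal rows, assuming one event per login and fields named as in the table:
```
| stats count by Country IP
| eventstats sum(count) as grand_total
| eval Percentage = round(count / grand_total * 100, 0) . "%"
| appendpipe
    [ stats sum(count) as count first(grand_total) as grand_total by Country
      | eval Percentage = "Total=" . round(count / grand_total * 100, 0) . "%"
      | eval IP = "" ]
| eval order = if(IP == "", 2, 1)
| sort 0 Country order
| fields IP Country Percentage
```
The appendpipe subsearch runs over the existing results, so the subtotal rows are computed from the same counts; the `order` field just places each Total row after its country's IP rows.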
I have the following search that should display the current free memory on a Windows system. However, it appears that I am missing something.
```
index="perfmonmemory"
| eval mem_free=mem_free/1024
| eval mem_free=round(mem_free,0)
| timechart count span=1min
| bin _time span=1min
| stats avg(mem_free) as rpm
| gauge rpm 10 20 30 40 50 60
```
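A sketch of a working version, assuming `mem_free` is the field to display: `timechart count` replaces the events with per-minute counts, so `mem_free` no longer exists when the later `stats avg(mem_free)` runs, and the gauge gets no value. Dropping the timechart/bin steps and aggregating directly fixes that (the `-5m` window is a placeholder for "current"):
```
index="perfmonmemory" earliest=-5m
| eval mem_free=round(mem_free/1024, 0)
| stats latest(mem_free) as rpm
| gauge rpm 10 20 30 40 50 60
```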
Hi there, I've been attempting to create a dashboard with metrics from the itsi_im_metrics index but am struggling with "instances" and the LogicalDisk.%_Free_Space metric. Using the following search, I can see the "instances" dimension is used for each logical volume:
```
| mcatalog values(_dims) WHERE "index"="*" GROUPBY metric_name index instance
| rename values(_dims) AS dimensions
| table metric_name dimensions index instance
```
I can get a visualisation for each of the instances with the following, changing the C: to d: or E: respectively:
```
| mstats prestats=true avg(LogicalDisk.Free_Megabytes) WHERE (`itsi_entity_type_windows_metrics_indexes`) span=1m AND instance=C:
| timechart span=1m avg(LogicalDisk.Free_Megabytes) as "Megabytes Free"
```
...but I can't get all three of them (C:, d: and E:) into the same table, with columns _time, C: % free, d: % free, and E: % free. Any tips or advice would be greatly appreciated! Cheers
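A sketch using a by-clause instead of one search per drive, assuming the instance dimension carries the drive letters shown by the mcatalog search: splitting by instance in both mstats and timechart yields one column per drive in a single table:
```
| mstats prestats=true avg(LogicalDisk.%_Free_Space) WHERE (`itsi_entity_type_windows_metrics_indexes`) AND instance IN ("C:", "d:", "E:") span=1m BY instance
| timechart span=1m avg(LogicalDisk.%_Free_Space) BY instance
```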
I inherited this Splunk instance that uses SAML, but when I add a "new" user, it is configured with "Splunk" as the authentication method. All the other users have SAML as the authentication method. Can anyone share some help on adding users with the SAML authentication method?
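A sketch of how SAML users normally get access, assuming group-based role mapping is the goal: users created under Settings > Users are always local ("Splunk" authentication), and SAML users are not created that way. Instead, they appear once they sign in through the IdP, with roles assigned by mapping their IdP groups, either under Settings > Authentication methods > SAML groups or in authentication.conf. The group names below are placeholders:
```
# authentication.conf
[roleMap_SAML]
admin = SplunkAdmins
user = SplunkUsers
```
So to "add" a SAML user, add them to the appropriate group on the IdP side rather than creating them in Splunk.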
Two independent playbooks performing different automation tasks are triggered by the same event. The expectation is that both playbooks start at approximately the same time; however, we observed that in some cases they start anywhere between 10 and 50 seconds apart. Is there some way to configure SOAR to run these 2 playbooks synchronously?

First playbook start time:
2022-10-12T15:07:40.773325Z: Starting playbook 'core/SGs Link Verification (id: 121, version: 14, pyversion: 3, scm id: 10)' on event '1811' with playbook run id: 513, running as user '2' with scope 'new'

Second playbook start time:
2022-10-12T15:08:32.483185Z: Starting playbook 'core/Limit SGs Run Time (id: 122, version: 10, pyversion: 3, scm id: 10)' on event '1811' with playbook run id: 514, running as user '2' with scope 'new'
We have implemented the Splunk Add-On for Google Workspace (https://splunkbase.splunk.com/app/5556) in our Splunk environment, following this documentation: https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs1

Currently, we are successfully getting logs with the sourcetype "gws:gmail", which is good. However, we are not getting logs for the other sourcetypes:

gws:reports:admin
gws:reports:calendar
gws:reports:context_aware_access
gws:reports:drive
gws:reports:gcp
gws:reports:login
gws:reports:oauthtoken
gws:reports:saml

Looking at the _internal index, we see the following error:
```
2022-10-06 18:45:36,130 ERROR pid=32667 tid=MainThread file=activity_report.py:stream_events:140 | Exception raised while ingesting data for activity report: . Traceback: Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/activity_report.py", line 133, in stream_events
    service,
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/gws_runner.py", line 97, in run_ingest
    proxies,
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/gws_request.py", line 116, in fetch_report
    rand=random.random,
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/gws_request.py", line 52, in _retry_request
    raise CouldNotAuthenticateException()
gws_request.CouldNotAuthenticateException
```
This appears to be a permissions/authentication issue. We have recreated the accounts and applied them to the inputs in the app; this has not resolved the issue. At this stage we are trying to determine the appropriate permissions for the account needed to access the above sourcetypes.

To clarify: we have 2 service accounts for this implementation, one for Gmail (which is working) and the other for the other activity reports (which is not). At this stage, I just need permissions/role/scope info for the non-working service account. The troubleshooting documentation is somewhat confusing as to what is needed, specifically steps 1 and 6, which seem to contradict one another:

1. Log into your Google Cloud service account. This service account cannot be an organization admin account.
2. Copy the Client ID of this service account.
3. Navigate to https://admin.google.com/ac/owl/domainwidedelegation.
4. Check if the Client ID for your service account contains the https://www.googleapis.com/auth/admin.reports.audit.readonly scope. If it is not there, add your Client ID, and specify the https://www.googleapis.com/auth/admin.reports.audit.readonly scope.
5. Navigate to https://console.cloud.google.com/iam-admin/iam.
6. Check if the account you are using for the Username field contains the Organization Administrator role.
7. Navigate to the Certificate field. Verify that you added the entire JSON file that you downloaded as a key for your service account.
8. Save your changes.

It seems like they have merged the instructions for two service accounts into one? Apologies in advance if I am missing something simple; I think I may have gotten too far into the weeds on this one.
I am a student and I want to use Splunk for education purposes. I believe that 14 days of trial (in the case of Cloud) or 60 days of trial (in the case of Enterprise) is not enough for me, so I'd like to use the Free license, mentioned here. So, the question: is the Splunk Free license available only for the Enterprise version? Or can it be used with Splunk Cloud too?
We need to know how to monitor lookups created inside Splunk, checking whether they are empty or have errors. We use REST to bring up all the lookups:
```
| rest /services/data/transforms/lookups
| table eai:acl.app eai:appName filename title fields_list updated id
```
but how can we check whether the lookups are broken or empty?
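A sketch of an emptiness check for a single lookup (the lookup name is a placeholder): inputlookup returns the file's rows, so a zero count means empty, and a search-time error here also surfaces broken definitions:
```
| inputlookup my_lookup.csv
| stats count
| eval status = if(count == 0, "EMPTY", "OK")
```
The same test can be scripted across the REST results with the map command, subject to its maxsearches limit.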
For those of you who have installed SC4S in a Docker for Windows environment, what differences were there in the install (as opposed to any of the other environments in the documentation)?  Did you run into any Docker for Windows roadblocks?  Any particular challenges?