All Topics


I am working on a very large dashboard (per the requirements given to me) that is running slowly. I've managed to cut down the number of panels significantly and optimized the queries to the best of my ability. I've looked into base searches, but the panels all have significantly different queries (most only have the index in common).

I noticed that the dashboard runs fine when I am logged into Splunk with my team's service account, and slower when I log in as myself. I believe this is expected because we have greater resources allocated to our service account. If that is true, can I power the dashboard with the service account instead of the individual's account? I created the dashboard with the service account, but when I check the "Jobs" page, it says the owner of the queries is the individual. The dashboard is shared at the app level with read/write permissions set appropriately for my team.
When you print the summary of an investigation through ES, it does not include notes. Is there a way to add those? Alternatively, is there a way to use SPL to find those notes, artifacts, and events to create a report from a custom dashboard?
I have the following Splunk event:

2020-Jul-30 18:19:02.891Z level=DEBUG thread=https-jsse-nio-2720-exec-9 pid=20 code_location=c.x.p.service.WebhookEventServiceImpl request_id=1fPwftTa2ylVm7CbcwnBirNhhjX trace_id=79d2157d38d3fd37 Processing message event[id=WH-29K757251Y0625428-0EP848134S044830M; resourceType=dispute; paypalDebugId=bac532dd23d05] using routingKey[com.xoom.paypal-events.v1.CUSTOMER.DISPUTE.UPDATED].

1. I want to create a chart that aggregates by resourceType and routingKey.
2. I have the following command:

index="myindex" sourcetype="mySourceType" "Processing message event" | rex field=instance "routingKey\[(?<eventType>)\]\s" | chart count by resourceType eventType

3. The result I get is grouped only by resourceType. I am not able to get the <eventType> capture assigned to a field name for the chart.
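The named capture group in that rex is empty: `(?<eventType>)` matches zero characters, so the pattern requires `[` to be immediately followed by `]` and never extracts anything. A minimal Python sketch (Python spells named groups `(?P<...>)`; the sample event is copied from the question) shows the difference once the group is given a pattern such as `[^\]]+`:

```python
import re

event = ("Processing message event[id=WH-29K757251Y0625428-0EP848134S044830M; "
         "resourceType=dispute; paypalDebugId=bac532dd23d05] "
         "using routingKey[com.xoom.paypal-events.v1.CUSTOMER.DISPUTE.UPDATED].")

# As in the question: the empty group forces a literal "routingKey[]" and cannot match.
empty = re.search(r"routingKey\[(?P<eventType>)\]", event)

# Give the group a pattern that consumes everything up to the closing bracket.
fixed = re.search(r"routingKey\[(?P<eventType>[^\]]+)\]", event)

print(empty)                        # no match
print(fixed.group("eventType"))
```

In SPL the equivalent would be along the lines of `| rex "routingKey\[(?<eventType>[^\]]+)\]"`. It may also be worth checking that `field=instance` refers to a field that actually exists; extracting from `_raw` may be what's intended.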
Searches return errors like those below. The indexer IPs in the errors change across attempts of the same search. The searches were run over long time ranges (15-20 days) on an index with a relatively large number of events.

----
3 errors occurred while the search was executing. Therefore, search results might be incomplete.
Unknown error for indexer: <INDEXER_IP1>. Search Results might be incomplete! If this occurs frequently, check on the peer.
Unknown error for indexer: <INDEXER_IP2>. Search Results might be incomplete! If this occurs frequently, check on the peer.
Server error
----

Inspecting the job shows:

----
warn : Socket error during transaction. Socket error: Success
error : Unknown error for indexer: <INDEXER_IP1>. Search Results might be incomplete! If this occurs frequently, check on the peer.
error : Unknown error for indexer: <INDEXER_IP2>. Search Results might be incomplete! If this occurs frequently, check on the peer.
----

Related entries from search.log for one of the indexer IPs:

----
07-29-2020 05:46:53.900 INFO TcpOutbound - Received unexpected socket close condition with unprocessed data in RX buffer. Processing remaining bytes=5792 of data in RX buffer. socket_status="Connection closed by peer" paused=1
07-29-2020 05:47:00.543 ERROR HttpClientRequest - HTTP client error=Success while accessing server=http://<INDEXER_IP1>:8089 for request=http://<INDEXER_IP1>:8089/services/streams/search?sh_sid=1596001558.314297_64CB0758-30F3-4D5E-9CC0-DA1DD06754ED.
07-29-2020 05:47:09.734 WARN SearchResultParserExecutor - Socket error during transaction. Socket error: Success for collector=<INDEXER_IP1>
----

From another discussion, I saw this may be related to the ulimit value. However, I'm not seeing any ulimit/thread/socket errors in splunkd.log.
The ulimit -n value is 1024 on the indexers (which I believe is a soft limit), but Splunk uses 100K according to the startup log:

----
splunkd.log.3:07-08-2020 10:19:48.050 +0000 INFO ulimit - Limit: open files: 100000 files
----
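The gap between `ulimit -n` in a shell (1024) and the 100000 that splunkd reports is consistent with soft versus hard limits: `ulimit -n` shows the shell's soft limit, and a process may raise its own soft limit up to the hard limit at startup. A small POSIX-only Python sketch of that mechanism, using the standard `resource` module:

```python
import resource

# Each process has a soft limit (the enforced value, what `ulimit -n` shows)
# and a hard limit (the ceiling). An unprivileged process may raise its soft
# limit up to the hard limit, which is how a daemon can run with a far higher
# open-files limit than the shell default suggests.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# Raise the soft limit to the ceiling (a no-op if they already match).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print(new_soft == hard)
```

So the value worth verifying on the indexers is the hard limit (`ulimit -Hn`) for the user Splunk runs as, not just the soft limit the shell reports.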
On dashboard load, when my initial/default value is "daily", my token <set token="tok_daily">$dailytoken$</set> is not set (it does not get today's date automatically), whereas <set token="tok_weekly">*</set> is set. I have to change the dropdown to weekly and back to daily again for it to work properly. Can someone tell me what I am doing wrong?

<search>
  <query>
    | makeresults
    | eval today=strftime(now()+19800, "%m/%d/%Y"), currentweek=strftime(now()+19800, "%U-%Y")
    | table today currentweek
  </query>
  <done>
    <set token="dailytoken">$result.today$</set>
    <set token="weeklytoken">$result.currentweek$</set>
  </done>
</search>
<fieldset submitButton="false" autorun="true">
  <input type="dropdown" token="custom_time_token" searchWhenChanged="true">
    <label>Select Time Range</label>
    <choice value="daily">Daily</choice>
    <choice value="weekly">Weekly</choice>
    <choice value="monthly">Monthly</choice>
    <initialValue>daily</initialValue>
    <default>daily</default>
    <change>
      <condition value="daily">
        <set token="tok_daily">$dailytoken$</set>
        <set token="tok_weekly">*</set>
      </condition>
      <condition value="weekly">
        <set token="tok_weekly">$weeklytoken$</set>
        <set token="tok_daily">*</set>
      </condition>
    </change>
  </input>
</fieldset>

I use the token values in my base search like this:

basesearchstring | search scheduled_delivery_date="$tok_daily$" AND week_of_year="$tok_weekly$" AND delivery_month="$tok_monthly$"

When I use those set token values in my base search, my result is like this:

basesearchstring | search scheduled_delivery_date="$tok_daily$" AND week_of_year=* AND delivery_month=*
Hi, I am attempting to update a notable. The notable lets us identify whether a new AWS user has been created via the API or via the AWS Management Console, based on the ingestion of AWS CloudTrail event logs into our Splunk instance. We have a situation where a number of the new AWS users are being created in our Dev and Test accounts. I am attempting to filter out those specific events and focus only on new AWS users created in other accounts. The Dev and Test AWS accounts have their own specific 'arn' prefixes, which uniquely identify which AWS resources are assigned to which account.

Could someone please advise whether I am on the right track with the revised SPL below? Should I be using another attribute from the AWS CloudTrail logs, or is the 'arn' the right direction?

index=aws sourcetype="aws:cloudtrail" (arn!="arn*xxxxxxxxxxxx*" OR arn!="arn*xxxxxxxxxxxx*") AND (eventName=CreateUser OR eventName=CreateLoginProfile OR eventName=CreateAccount) errorCode=success
| rex field=userIdentity.arn ".*\/(?<src_user>.*)$"
| rename requestParameters.accountName as account_name requestParameters.userName as user_name eventName as action
| eval user = coalesce(account_name,user_name)
| fields requestID src_user action user eventSource urgency

Thanks again in advance; I appreciate any assistance or guidance anyone can offer.
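One thing to check in the revised SPL: `(arn!="A" OR arn!="B")` is true for every event, because an event whose arn matches A still satisfies `arn!="B"`, so nothing is excluded. To exclude both account prefixes the inequalities must be ANDed (equivalently, `NOT (arn="A" OR arn="B")`). A tiny Python sketch of just the boolean logic, with illustrative placeholder values:

```python
# Exclusion filters: an OR of inequalities never excludes anything.
def or_filter(arn, a, b):
    return arn != a or arn != b      # as written in the question

def and_filter(arn, a, b):
    return arn != a and arn != b     # equivalent to NOT (arn=a OR arn=b)

print(or_filter("arn:dev", "arn:dev", "arn:test"))   # True: the dev event slips through
print(and_filter("arn:dev", "arn:dev", "arn:test"))  # False: the dev event is excluded
```

In the search this would read `arn!="arn*xxxxxxxxxxxx*" AND arn!="arn*xxxxxxxxxxxx*"`; the wildcards don't change the boolean behavior.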
Hi Team, we are not getting logs from MineMeld. We are seeing the logs below.

07-15-2020 06:07:24.917 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/minemeld_feed.py" ERRORHTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/Splunk_TA_paloalto/storage/collections/data/minemeldfeeds?query=%7B%22splunk_source%22%3A+%22Mine_Meld%22%7D (Caused by ReadTimeoutError("HTTPSConnectionPool(host='127.0.0.1', port=8089): Read timed out. (read timeout=30.0)",))

07-15-2020 06:12:24.843 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Jul 15 06:12:24 2020). Context: source=/opt/splunk/var/log/splunk/Splunk_TA_paloalto_minemeld_feed.log|host=ndcsecspkhfp51.global.loc|Splunk_TA_paloalto_minemeld_feed-too_small|730457
Can I use the map command with the variable being the index and/or sourcetype?

| makeresults
| eval User = "12345", index = "index=_audit"
| table User, index
| map search="search $index$ user="$User$" | table field_1, field_2"
So I have a search that is structured as follows:

index=main <filtering for start and end events> OR <filtering for events within start and end events>
| rex field=_raw "...<Rising_Node>..."
| rex field=_raw "...<Falling_Node>..."
| transaction startswith="..." endswith="..."

The rex fields work, the transaction works, etc. However, for the events within the transaction, it pulls from every host that fits the filtering. Basically, I want to limit the transaction to only the hosts listed in the Falling_Node and Rising_Node fields. I've tried adding "host=Falling_Node OR host=Rising_Node" to the "filtering for events within start and end events", but it either clears all events out of the transaction or does nothing. Does anyone have tips?
I have worked on a query to generate a report of monthly visits, bandwidth used, etc. The query is listed below. It gives results in the Splunk environment, but when we generate a PDF document we see the error listed below the query, after the results table. I would appreciate suggestions to resolve this issue.

| rex field=_raw "(\"|)(?<server_ip>\d+\.\d+\.\d+\.\d+)\s+(?<reqip>\d+\.\d+\.\d+\.\d+)"
| rex field=_raw "(?<uri>\s+[\w\d\/\.]+-\S+)"
| rex field=_raw "\s(?<status>\d+)\s(?<bytes>\d+)"
| timechart span=1mon@mon dc(reqip) as "Unique Visitors", dc(uri) as Pages, sum(bytes) as Bandwidth(KB), count(uri) as Hits
| eval Bandwidth(KB) = round('Bandwidth(KB)'/1024,2)
| append [search index="med" sourcetype="med:httpaccess:log" *med.cms.gov*
    | rex field=_raw "(\"|)(?<server_ip>\d+\.\d+\.\d+\.\d+)\s+(?<reqip>\d+\.\d+\.\d+\.\d+)"
    | rex field=_raw "(?<uri>\s+[\w\d\/\.]+-\S+)"
    | rex field=_raw "\s(?<status>\d+)\s(?<bytes>\d+)"
    | bin span=1mon@mon _time
    | stats count(uri) as viewed by _time, reqip
    | stats sum(viewed) as "Number of Visits" by _time]
| stats values(*) as * by _time
| fillnull
| addcoltotals label="Total" labelfield="_time"
| table _time "Unique Visitors" "Number of Visits" Pages Hits Bandwidth(KB)

The expected result is
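Apart from the PDF issue, the rex extractions and the KB conversion can be sanity-checked offline. A hedged Python sketch using the first and third patterns translated to Python's `(?P<...>)` named-group syntax; the sample log line is fabricated for illustration, since the actual source format isn't shown in the question:

```python
import re

# Fabricated access-log line matching the layout the rex patterns assume:
# server IP, requester IP, request, status, bytes.
line = '10.0.0.1 192.168.1.20 "GET /some/path-page HTTP/1.1" 200 524288'

ips = re.search(r'(?P<server_ip>\d+\.\d+\.\d+\.\d+)\s+(?P<reqip>\d+\.\d+\.\d+\.\d+)', line)
sb = re.search(r'\s(?P<status>\d+)\s(?P<bytes>\d+)', line)

print(ips.group("reqip"))                       # requester IP
print(round(int(sb.group("bytes")) / 1024, 2))  # bytes -> KB, as in the eval
```

Note also that a field name containing parentheses, such as Bandwidth(KB), generally needs to be quoted on the left side of eval (`| eval "Bandwidth(KB)" = ...`); unquoted, it risks being parsed as a function call.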
Hi all, how to migrate Splunk Heavy Forwarder to a new server?   Thanks.
We are using 100+ machines. Could you please help me with a Splunk search? The scenario: I have 100 machines and need to create an alert for any machines that have not reported in the last 24 hours. Could you please help with the search itself?
"Could not reach the vc to test creds" is the error I am getting after I add the vCenter details. I have tried logging in directly to vCenter with the user ID and password, and it worked. curl also worked, ping worked, and telnet to port 443 worked. Any other suggestions?
We are using v8.0.4 of Splunk Enterprise. In our authorize.conf I see roles that are disabled. Examples:

[role_sec_power_user]
disabled = true

[role_sec_admin_user]
disabled = true

[role_idx_data_user]
disabled = true

I've looked through the spec file for authorize.conf and nowhere do I see an option to disable a role. Further, I don't see an option in the GUI to disable roles. Question: is it possible to disable a role using the syntax above? Thanks
Why am I getting the error message "Signature mismatch between license slave and this License Master" when the "license slave" is the same host? The machine is already configured to use a remote license server. I've checked the server.conf file and it's set up correctly and matches all the other hosts that are not showing this error.
Hi, is it possible to use tokens in rex expressions like this?

| rex "\d{1,2}-\S{3}\s\d{2}:\d{2}:\d{2}.\d{3}\s\S{3}\s\[(?<ip2>$spec_ip$)\]\s%NICWIN-4-Security_560_Security[\S\s]+?(?<log_time2>(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s\d{2}\s\d{2}:\d{2}:\d{2})[\S\s]+?\S*Object\sName:\s(?<object_name2>[\S\s]+?)New\sHandle\sID[\S\s]+?Primary\sUser\sName:\s(?<username2>[\S\s]+?)\s+"
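In Simple XML, tokens are substituted into the search string as plain text before the search (and therefore the rex pattern) is parsed, so this should work as long as the token's value is valid regex at that position. One caveat: an IP with unescaped dots will still usually match, but each `.` then matches any character. A minimal Python analogue, with `spec_ip` standing in for the hypothetical `$spec_ip$` token value:

```python
import re

# Token substitution is plain string interpolation into the pattern text.
spec_ip = r"10\.1\.2\.3"   # token value with dots escaped
log = "30-Jul 18:19:02.891 UTC [10.1.2.3] %NICWIN-4-Security_560_Security ..."

pattern = rf"\[(?P<ip2>{spec_ip})\]"
m = re.search(pattern, log)
print(m.group("ip2"))
```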
Hi, we want to use the Cisco WebEx Add-on for Splunk but are getting the following error:

2020-07-30 15:59:20,439 DEBUG pid=13947 tid=MainThread file=base_modinput.py:log_debug:288 | [-] WebEx Response: 'Incorrect user or password'
2020-07-30 16:00:18,794 INFO pid=14063 tid=MainThread file=setup_util.py:log_info:117 | Customized key can not be found

The user we are using works, and we can connect to WebEx in the web browser. We believe our SSO integration with ADFS is causing the issue, because an ADFS pop-up appears when we open the WebEx website and we have to enter the password there again. Is it possible that the add-on doesn't work in an SSO environment?

Thanks
Ale
We enabled the TAXII feed and we see under Threat Intelligence Audit that the TAXII feed polling was starting. Where can I see the data itself?
In our environment we have three client machines running the following Splunk Universal Forwarder versions: 6.2.15 and 6.3.14. One of the servers is Windows Server 2008 Standard 32-bit running Splunk Universal Forwarder 6.2.15; the other two are Windows Server 2008 Standard 64-bit running Splunk Universal Forwarder 6.3.14.

We are currently on Splunk Cloud 7.2.9.1 and are planning to upgrade our core to 8.0 or above, and these servers are acting as a roadblock for the upgrade. I'd like to know the difference between Windows Server 2008 Standard and Windows Server 2008 R2, and also the recommended Splunk Universal Forwarder version to upgrade to for the client machines running Windows Server 2008 Standard (32-bit as well as 64-bit). Kindly help with the same.
Hello, I am trying to onboard Defender ATP alerts using Microsoft Defender ATP Add-on for Splunk (https://splunkbase.splunk.com/app/4959/) but I can see certain alerts being onboarded multiple times. Has anyone else come across this type of issue before? Thanks, Revati