All Topics

I have logs which show the job status (Running, Succeeded, or Failed), and every job has a unique job ID. I want to calculate the duration each job took to succeed or fail. Each job ID has two events: the first with status Running, and the second with Succeeded or Failed. How can this be done?
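A minimal sketch, assuming the extracted field names are job_id and status (adjust to your actual field names and index):

```spl
index=your_index status IN ("Running", "succeeded", "failed")
| stats earliest(_time) AS start_time latest(_time) AS end_time latest(status) AS final_status BY job_id
| eval duration_sec = end_time - start_time
| eval duration = tostring(duration_sec, "duration")
```

Because each job ID has exactly two events, the range between earliest and latest _time per job_id is the elapsed time; tostring(..., "duration") renders it as HH:MM:SS.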
I am looking to middle align the single value panel in my SPLUNK dashboard.
INFO [] () process='isValid', result='failed', dacNumber='[DAC_111_646]', accountNumber=1122333
INFO [] () process='isValid', result='failed', dacNumber='[DAC_111_777]', accountNumber=1122333
INFO [] () process='isValid', result='failed', dacNumber='[DAC_111_888]', accountNumber=1122333
INFO [] () process='isValid', result='success', dacNumber='[DAC_111_777]', accountNumber=1122333
INFO [] () process='isValid', result='success', dacNumber='[DAC_111_999]', accountNumber=1122333
INFO [] () process='isValid', result='success', dacNumber='[DAC_111_646]', accountNumber=1122333

How do I get all failed dacNumber values which never passed? In the above example it should give me DAC_111_888, since it failed and never had a later success. Please help.
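One way to sketch this, assuming result and dacNumber are not auto-extracted (the rex pattern below is an assumption based on the sample events; drop it if the fields already exist):

```spl
index=your_index process="isValid"
| rex "dacNumber='\[(?<dacNumber>[^\]]+)\]'"
| stats count(eval(result="success")) AS successes count(eval(result="failed")) AS failures BY dacNumber
| where failures > 0 AND successes = 0
```

Grouping by dacNumber and counting successes per group lets you keep only the DACs that failed at least once and never succeeded.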
I am unable to start Splunk; it shows "access denied". Please tell me how to start Splunk on Windows.
Hi, I'm facing an issue while configuring SAML using G Suite: "Saml response does not contain group information". Please suggest a resolution for this error. Thanks, Sujith
I am working on using the same time range as the Time range picker as an argument in my search. How do I do that?

| metadata index=* type=hosts | eval First_Time=strftime(firstTime, "%Y-%m-%d %H:%M")

This is my search query, and I need the firstTime values to fall within the same time range as selected in the picker. That is, if this search is run from 1st Nov to 30th Nov, I need the firstTime values to also be within that range.
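One hedged sketch: the metadata command's firstTime values can fall outside the picker range, but addinfo exposes the picker's bounds as info_min_time and info_max_time, which can be used as a filter (note this assumes a concrete time range is selected; with "All time" the bounds are unbounded):

```spl
| metadata index=* type=hosts
| addinfo
| where firstTime >= info_min_time AND firstTime <= info_max_time
| eval First_Time = strftime(firstTime, "%Y-%m-%d %H:%M")
| fields - info_*
```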
Hi Splunkers, when I run a Splunk search, I use NOT <string> to exclude results containing that string. If I have a dashboard, how do I add a text or dropdown input to select the string to exclude from the dashboard's results? By the way, this string might not be a value of any field, just a random string. Kevin
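A common pattern is a text input on the dashboard whose token is substituted into the search. Assuming the input's token is named exclude, the panel's search could be sketched as:

```spl
index=your_index NOT "$exclude$"
```

Define the text input with token exclude and set a default value that never matches any event (so an empty input excludes nothing); this works even when the string is not the value of any field, since quoted terms match raw event text.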
I was surprised by this result: in a field starting with a value that can be interpreted as an integer, groupby treats it lexically, but sort treats it numerically. How does sort determine the intention? Is there a syntax to force a lexical sort? To illustrate, consider the following:

| makeresults
| eval i = mvrange(-3, 4)
| mvexpand i
| eval i = printf("%+d", i) . "x"
| stats count by i

As is (groupby only):

i    count
+0x  1
+1x  1
+2x  1
+3x  1
-1x  1
-2x  1
-3x  1

After adding | sort i:

i    count
-3x  1
-2x  1
-1x  1
+0x  1
+1x  1
+2x  1
+3x  1

In my use case, numeric sort is desired (that is how I "discovered" this). I'm just curious about the mechanism.
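For forcing one interpretation or the other, sort accepts per-field type modifiers: str() forces a lexicographic comparison and num() forces a numeric one. A sketch using the example above:

```spl
| makeresults
| eval i = mvrange(-3, 4)
| mvexpand i
| eval i = printf("%+d", i) . "x"
| stats count by i
| sort str(i)
```

With str(i) the order matches the stats groupby output; with num(i) or plain sort i, values with a leading numeric prefix are compared numerically.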
Hello, I am new to Splunk and working on getting our environment set up correctly. I have an SC4S server set up and working. My question is about the UF installed on Windows servers and Windows AD servers: should the UF be set up to send data to the SC4S server, or should it send directly to the Splunk indexer? Thanks,
I need to forward all Windows Security/Application/System logs to two separate Splunk instances with different index names. So: Security log → Index1 on ServerA, Index2 on ServerB. In inputs.conf on my UF, do I use index=index1,index2? Then in outputs.conf on the HF, do I send to both ServerA and ServerB? I need to make sure ServerB does not get hit with Index1.
Just want to ask whether the approach to mapping groups/users in the AppDynamics Controller is the same in the Account Portal. I already configured the SAML federation in Azure. Then, in Azure, I added a group and mapped it in the Account Portal for SSO. However, users are still required to enter a password upon login. Is there something I missed?
This is the current query, but it's not really providing the needed data for the search.

index=main sourcetype=XmlWinEventLog EventCode=4624 Logon_Type=3
| transaction src maxspan=10m maxpause=2m
| stats dc(dest) as Dest_Count, values(dest) as Target_Systems by src
| search Dest_Count > 35
| sort - Dest_Count

I don't really care about the Dest_Count > 35; it was an attempt to gather something to start with. I was told to research the transaction command to obtain the required results.
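A possible alternative sketch that avoids transaction (which is memory-hungry and often unnecessary for simple windowed counting) is to bucket events into fixed time windows and count distinct destinations per source in each window; the 10-minute span below mirrors the maxspan in the original query:

```spl
index=main sourcetype=XmlWinEventLog EventCode=4624 Logon_Type=3
| bin _time span=10m
| stats dc(dest) AS Dest_Count values(dest) AS Target_Systems BY src _time
| sort - Dest_Count
```

This shows, for each source and 10-minute window, how many distinct systems it logged on to, which is usually the signal sought in this kind of lateral-movement search; a threshold filter can be added back once a baseline is known.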
Hi, my search result brings back a GUID in the ID field. The GUID refers to a customer, and I would like the result to show the customer's name instead. Can I make a Splunk search do this every time? Best wishes, Michael
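The usual approach is a lookup. As a sketch, assume a CSV lookup named customer_lookup with columns guid and customer_name (both names are hypothetical; use your own lookup definition):

```spl
index=your_index
| lookup customer_lookup guid AS ID OUTPUTNEW customer_name
```

To make the mapping happen "every time" without adding the lookup command to each search, the same lookup can be configured as an automatic lookup on the relevant sourcetype under Settings → Lookups.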
Hi, I need to filter data to reduce my ingestion volume, and for that I need to change the two files below. But I don't see these files under any option in the Splunk Cloud Platform. I am attaching the Settings menu in Splunk Cloud. I have sc_admin privileges, but I still don't see these files anywhere. Can you please let me know where to find these files and how to edit them in Splunk Cloud?

props.conf:

[httpevent]
TRANSFORMS-t1 = eliminate-okhttp3

And the following needs to be edited in transforms.conf:

[eliminate-okhttp3]
REGEX = okhttp3
DEST_KEY = queue
FORMAT = nullQueue

Thanks, Dee
There is an SPL search, ending with stats, that generates 300 events. That search, let's call it SEARCH-1, is saved as a saved search, and in the saved search one extra line is added:

| collect index=sec_apps_summary source="savedSearch_1d"

with earliest and latest set to -1d@d and @d. There is another search, SEARCH-2, that reads what the saved search collected, and its SPL starts like:

index=sec_apps_summary source="savedSearch_1d" ...

What confuses me is that SEARCH-1 and SEARCH-2 should show the same count of results, but I see 300 events for SEARCH-1 and far fewer, 16 events, for SEARCH-2. I suspect it is something about the way the saved search is utilized; I don't quite understand the difference in result. Any idea why?
Hi all, we are facing an issue with the Splunk Add-on for AWS where the Configuration and Inputs pages in the UI aren't loading; they just keep showing a loading circle and the page never appears. We started observing this after recently upgrading Splunk Enterprise to 8.2.3.3; we also updated the add-on to the latest version, 5.2.1. In splunkd.log we can see some REST errors. PFB:

12-23-2021 07:40:25.606 -0500 ERROR AdminManagerExternal [11556 TcpChannelThread] - Unexpected error "<class 'splunklib.binding.HTTPError'>" from python handler: "HTTP 500 Internal Server Error -- Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/connectionpool.py", line 667, in urlopen\n self._prepare_proxy(conn)\n File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/connectionpool.py", line 930, in _prepare_proxy\n conn.connect()\n File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/connection.py", line 316, in connect\n self._tunnel()\n File "/opt/splunk/hf/lib/python3.7/http/client.py", line 931, in _tunnel\n message.strip()))\nOSError: Tunnel connection failed: 403 Forbidden\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/packages/requests/adapters.py", line 449, in send\n timeout=timeout\n File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/connectionpool.py", line 725, in urlopen\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/urllib3/util/retry.py", line 439, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='0', port=8089): Max retries exceeded with url: /servicesNS/nobody/Splunk_TA_aws/configs/conf-aws_sqs_tasks/_reload (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktaucclib/rest_handler/handler.

Error from python.log:

Traceback (most recent call last): File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/splunk_rest_client.py", line 145, in request verify=verify, proxies=proxies, cert=cert, **kwargs) File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/packages/requests/api.py", line 60, in request return session.request(method=method, url=url, **kwargs) File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/packages/requests/sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/packages/requests/sessions.py", line 646, in send r = adapter.send(request, **kwargs) File "/opt/splunk/hf/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/solnlib/packages/requests/adapters.py", line 510, in send raise ProxyError(e, request=request) solnlib.packages.requests.exceptions.ProxyError: HTTPSConnectionPool(host='0', port=8089): Max retries exceeded with url: /servicesNS/nobody/Splunk_TA_aws/configs/conf-aws_sqs_tasks/_reload (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))

It has only impacted the UI; the inputs continue to work in the background. Any help with this would be appreciated. Regards, Sumeet
Does anyone know how to write log events to another index using SPL? I'm assuming the answer is something like registering records from a lookup file into an index by running an SPL search on a schedule. If there is a way like that, please share your answer.
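A minimal sketch using the collect command, assuming a lookup file my_records.csv and a destination index target_index that already exists (both names are placeholders):

```spl
| inputlookup my_records.csv
| eval _time = now()
| collect index=target_index source="lookup_import"
```

Saved as a scheduled search, this writes the lookup's rows into the destination index on each run; collect needs a _time field on each result, hence the eval.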
ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=c:\logs\MailBoxAudit\mailboxaudit_23_12_2021_13_48.csv). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source.

inputs.conf (/opt/splunk/etc/deployment-apps/TA-Exchange-Mailbox/local/inputs.conf):

[monitor://c:\logs\MailBoxAudit]
whitelist = \.csv$|\.CSV$
sourcetype = csv
index = indexname
disabled = false
crcSalt = <SOURCE>
initCrcLength = 8192

After making this change to inputs.conf, I ran /opt/splunk/bin/splunk reload deploy-server -class heavy_forwarders for the changes to be accepted. Data from the file reaches the index, but some files never start being read. What could be the problem?
For Salesforce logs, I would like to create SPL for the following requirements:

(1) Detect users who have failed login multiple times every day for a week or more.
(2) Detect cases where multiple user IDs have been locked out of login from the same IP.

What SPL should I write?
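A hedged sketch for requirement (1), assuming the data comes from the Splunk Add-on for Salesforce and that the index name, sourcetype, and field names (user, Status) match your environment; all of these are assumptions to adjust:

```spl
index=salesforce sourcetype=sfdc:loginhistory Status=Failure earliest=-7d@d
| bin _time span=1d
| stats count AS daily_failures BY user _time
| where daily_failures > 1
| stats dc(_time) AS days_with_failures BY user
| where days_with_failures >= 7
```

This counts failures per user per day, keeps only days with multiple failures, then keeps users who had such a day on all seven days. For requirement (2), a similar shape grouped by source IP could work, e.g. filtering lockout events and finishing with | stats dc(user) AS locked_users BY src_ip | where locked_users > 1 (the lockout status value and src_ip field name also depend on your add-on's field extractions).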