All Topics


Hello everyone, I need a query to find out how much data sourcetype=gshshsh is using: 1. day-wise usage for the month of February, and 2. total usage from September to February.
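A minimal sketch of such a query, using the license usage log in the _internal index (st and b are the fields license_usage.log uses for sourcetype and bytes; note that st can be squashed when there are very many host/sourcetype combinations, and the time window and span are assumptions to adjust):

index=_internal source=*license_usage.log type="Usage" st=gshshsh earliest=-30d
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 2)

Widening earliest to cover September through February and changing span=1d to span=1mon would give the month-level view.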
Hi All, Is there any search query to find out the configurations for any particular app or index using the Splunk Web UI?
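One way to inspect configuration from the search bar is the rest command; a sketch for index settings (the title filter is a placeholder):

| rest /services/configs/conf-indexes splunk_server=local
| search title="your_index"
| table title homePath coldPath maxTotalDataSizeMB frozenTimePeriodInSecs

For a particular app's .conf files, the /services/properties/<conf-name> endpoints can be browsed the same way.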
Hi All, Can someone please help me with masking data and regex? Currently we have an event where I need to mask certain data in a field extraction. I have already worked out a basic regex for Sample 1:

| rex field=_raw "(PAE\/)(?<Mask_Data>\d+\W\w+\d\s)"

but I am looking for a common (or a separate) regex for all the samples below. I want to keep the events but mask the numbers before " : : " and after the "/"; I am fine if only the numbers in the tail get masked.

Event samples:
1) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalPAE/188888/WWEE1112: :
2) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalAssessment/188888/EEE3456823947 : :
3) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalAssessmentFromEEF/11111233 : :
4) Request_URL=ghghghghghhghghghhghg/eeeee/xxx/functionalAssessmentFromservices/1333/11233 : :

Thanks in advance.
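A minimal sketch of a search-time mask that replaces every /digits segment with a fixed token (mode=sed uses standard substitution syntax; the XXXX token and the g flag are choices to adjust):

| rex field=_raw mode=sed "s/\/\d+/\/XXXX/g"

For masking at index time instead, the same substitution could go in a SEDCMD in props.conf for the sourcetype.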
I know this is available at an application level, but is there a way to do it at a tier level so other tiers in the application are not affected? Or is there a cunning workaround where we could disable it for all and then have a health rule so it still fires for the other tiers? Thanks! Jeremy.
Hello everyone, We are using the Splunk_TA_nix add-on to get some logs from the Linux servers, but we noticed that when we run the Health Check in the Monitoring Console we get an alert: the index fed by that specific app appears to be generating a lot of sourcetypes. I checked the documentation and cannot see this listed as a known issue. So I would like to know if this is expected behavior, or if there is any way we can fix it.

Splunk Enterprise: 8.2.2 over x86_64 GNU/Linux
Splunk_TA_nix: 8.3.1

Thank you in advance
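A quick way to see what the add-on is actually producing is to count events per sourcetype in the affected index; a sketch (the index name is a placeholder):

| tstats count where index=your_nix_index by sourcetype
| sort - count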
The basic issue I faced was a dashboard with a prominent single-value visualisation that was to display a count of exceptions. The users wanted 0 exceptions to be the "good" color and a range of colors after that. To demonstrate, here is a simple test dashboard making use of the excellent features of the single-value viz:

<form>
  <label>test single value viz</label>
  <fieldset submitButton="false">
    <input type="text" token="limit">
      <label>limit</label>
      <default>2</default>
      <initialValue>2</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search>
          <query>| gentimes start=1/25/2022 end=1/26/2022 increment=1h
| eval count=random()%$limit$
| eval _time=starttime
| table _time count
| timechart span=6h sum(count) as count</query>
          <earliest>-1h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</form>

The default limit of 2 will result in a viz showing a lovely blue background and some values and a trendline, depending on the random data generated. A limit of 20 will most likely produce an orange background, and a limit of 200 a red background. All this is expected and in accordance with the default viz produced by using the "Save As Dashboard Panel" option from the base window. A limit of 1, which results in all data values of 0, gives the green background; this is still expected.

Where I struggle is a limit of 0 (or less), which gives no data, since number % 0 is undefined. The data for such a search has no values in the count column. So what to do? The single-value viz has decided that null values are nearer the max value than the min value, which makes sense if you use the default colors, because the max value is colored red. But if in your situation your low values are the more aberrant ones and you consider null values aberrations, you'd want the nulls colored like your min value. Also strange, though, is that the value on the chart shows 0 even when all the values in the data set are null. Suddenly null became 0 and not undefined, and thus 0 is treated as higher than max instead of lower than min. I find this to be a mistake: either it's treated as 0, so color it as 0 and show it as 0, or it's treated as null, so color it as null and show it as null (or undefined, or something other than 0).

The only workaround I could find (without looking at CSS changes) is a bit ugly and may not suit all situations. I kludge the upper limit to some value "higher than I could ever reach" (famous last words) and stick the colour I want to display for no data there:
<option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41","0x53a051"]</option> <option name="rangeValues">[0,30,70,100,100000000]</option>   In the real world situation I had, zero values was considered good, and no data at all is also good so the quick fix of the viz as above was enough to allow users to visualise the date. A better solution is to change the base search to something that always returned a 0  rather than null or add a line after the timechart to force nulls to an acceptable value.   I like the latter as it's far more clear what's going on.   | eval count=coalesce(count ,0)   When no data at all is returned by base search (as happened in my real world case) it can be handled the normal way with hidden panel to display when no data returned.  Side note on this: I usually have a panel that displays when there is no data for the base search but there is some data in the index/sourcetype and a different panel when there is no data at all.  This is because on rare occasions you may have a problem with a forwarder or any number of other reasons resulting in events taking longer than expected to appear in an index.  Letting user know this is the case rather than assuming "all's good" is better in my view. In real world data it's ugly to manipulate source into vizualisation just to make it look right.  Sometimes we have to, but here I think the single-value vizualisation needs an option to let the user decide how to display missing or null values.  
I'm using Splunk Enterprise 8.2.4 and I would like to start getting my Windows forwarder estate (8.2.4) to send its perfmon data. Initially I thought this would be easy, but I was wrong: I thought that out of the box Splunk would let me collect Windows perfmon data straight to a metrics index. From reading the guide here, I think the pattern is as follows:

1. Configure the forwarder inputs stanza as normal, i.e. as you would to collect, say, the CPU metrics to an events index.
2. Point it at a metrics index, tagged with a custom sourcetype.
3. Transform/parse the event into metrics format at the indexer when received, based on that sourcetype.

Is this understanding correct, and if so, does anyone have a bundle of transforms ready to go (perhaps a TA or app that does this, like Splunk Add-on for Microsoft Windows | Splunkbase)?
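For reference, a minimal sketch of the forwarder side of that pattern, assuming a pre-created metric index named windows_metrics and the PerfmonMetrics sourcetypes that recent versions of the Windows add-on ship metric-schema transforms for (the counter list and interval are placeholders):

[perfmon://CPU]
object = Processor
counters = % Processor Time; % User Time
instances = *
interval = 10
mode = single
index = windows_metrics
sourcetype = PerfmonMetrics:CPU

The actual log-to-metrics conversion (METRIC-SCHEMA-TRANSFORMS in props.conf/transforms.conf) would then happen on the indexer, keyed off that sourcetype.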
I have a Splunk On-Call webhook that is using a POST request to send data to my index and sourcetype. Any time a user enters a chat message for an incident, it fires the webhook and the data immediately gets added to that sourcetype. My issue: the raw events in the index and sourcetype show one event; however, when I table the data, the values in each field get duplicated as a multivalue field with the same data. Based on other Splunk Community questions, I've made some changes to the sourcetype settings:

[mysourcetype]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
disabled = false
pulldown_type = true

This did not fix the issue like it has for others. I have tried creating sourcetypes a few different ways:
1. Going into Settings > Source types > selecting "New Source Type" and updating the settings.
2. Cloning the _json sourcetype that Splunk ships so I can keep its settings, but I am still getting duplicate values when I table.
3. Going into Settings > Data Inputs > HTTP Event Collector > selecting "New Token" > creating a new sourcetype in "Input Settings".

I also noticed that the JSON events do not get syntax highlighting by default. Is this due to KV_MODE being set to none? Can I set it to json without duplicating my data?
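One detail worth sketching out (the usual culprit in similar threads, not a confirmed diagnosis for this case): INDEXED_EXTRACTIONS is applied where the data is ingested, while KV_MODE and AUTO_KV_JSON are read on the search head, so in a distributed setup the stanza has to exist in both places:

# props.conf on the HEC receiver (indexer or heavy forwarder)
[mysourcetype]
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false

# props.conf on the search head
[mysourcetype]
KV_MODE = none
AUTO_KV_JSON = false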
Hello all, In our company I need to create a daily email notification for:

- Remote login
- Disabled account
- Event log stopped or cleared
- Account lockout

Please suggest which Windows events correspond to the above alerts. From my reading I think 4624, 4725, 1102 and 4740 are the Windows event IDs that I need to monitor, but I'm not sure. Thank you
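Those IDs do line up with the list: 4624 is a logon (remote sessions carry Logon_Type=10), 4725 is account disabled, 1102 is the audit log being cleared, and 4740 is a lockout; event ID 1100 (the event logging service shutting down) may also be worth including for the "stopped" case. A sketch of a search to drive the daily alert, assuming the standard Windows add-on index and sourcetype names:

index=wineventlog sourcetype=WinEventLog (EventCode=4624 Logon_Type=10) OR EventCode=4725 OR EventCode=1100 OR EventCode=1102 OR EventCode=4740
| stats count by EventCode, user, host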
I have a field whose value ranges from 0 to 20. I want to plot a graph showing the range of values being hit for the field every day. I tried using timechart, but instead of giving me ranges per day it starts building out graphs per value, like: value 1 occurred on day 1, day 2, day 4. I need it to tell me which values occurred on a particular day, rather than which days have those values.

index=a $search string$
| eval bytes=bytes/1000000
| timechart count by bytes

Hope I could explain what I am trying here.
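A sketch of one way to get day-oriented ranges: bucket the numeric field first, so the split-by produces a handful of ranges rather than one series per distinct value (the span of 5 is an arbitrary choice):

index=a $search string$
| eval mb=bytes/1000000
| bin mb span=5
| timechart span=1d count by mb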
I am running into an issue trying to get a chart to populate the way I expect. I am running a search over IIS logs that parses out referrer_stem and then counts the total of each referrer_stem per month. I am also splitting out the month field by both the short name and the numerical value (for testing the sort on each). This is the end portion of my search:

| eval date_month=strftime(_time, "%b")
| eval number_month=strftime(_time, "%m")
| chart count BY referrer_stem, date_month
| sort 10 - count

The issue I am having: if I do this with the date_month field, the columns or bars show out of order (i.e. Feb Jan), whereas with number_month the order is correct (i.e. 01 02). I want it to show in the correct order but using the month's short name. I did try a case statement on number_month, but that doesn't work because after the chart command the field name no longer seems to exist (or I just don't know how to access the right name). Any help or insight on this would be greatly appreciated.
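A sketch of the usual trick: build a single split-by value that sorts numerically but still shows the short name, e.g. a "01-Jan" style key (the exact format is a choice):

| eval month_key=strftime(_time, "%m-%b")
| chart count BY referrer_stem, month_key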
Hello Splunk community. I have a query currently running as shown below:

index=myIndex* api.metaData.pid="my_plugin_id"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL
| eval line=printf("%-85s% 10s% 10s% 7s", apiName, success, error, NULL)
| stats list(line) as line
| eval headers=printf("%-85s% 10s% 10s% 7s", "API Name", "Success", "Error", "NULL")
| eval line=mvappend(headers, line)
| fields - headers

It displays a table with "API Name", "Success", "Error" and "NULL" counts, and works as expected. Now I want to add a new column which displays the latency values (tp95 and tp99) for each apiName. The time taken by each API is in the field api.metadata.tt. How can I achieve this? I am new to Splunk and I am literally stuck at this point. Could someone please help me? Thank you.

Info: just to let you know, my query has this additional logic to format things because of a related question here.
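A sketch of computing those columns with the stats percentile functions before the formatting steps (field names quoted as given in the post; conditional counts via count(eval(...)) stand in for the chart/multikv step):

index=myIndex* api.metaData.pid="my_plugin_id"
| stats count(eval('api.metaData.status'="success")) as success
        count(eval('api.metaData.status'="error")) as error
        perc95('api.metadata.tt') as tp95
        perc99('api.metadata.tt') as tp99
  by api.p
| rename api.p as apiName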
Hello, I'm currently undergoing a sizing exercise to determine how large of a Splunk license I need, and was wondering if anyone could help. A quick background: I've got a trial license of Splunk Enterprise running on-prem as a single-instance deployment with the InfoSec app, and I am preparing to deploy Universal Forwarders to a select group of systems that will send security-related events and logs that I'd like Splunk to ingest and index. My organization is currently not interested in having Splunk ingest operations-type data, and wants to keep the scope of what Splunk ingests and indexes limited to just security-related events. I do have a specific list of sources, events and event IDs I want to include in the inputs.conf file, but my question is: will my single instance filter out all events that are not in the inputs.conf whitelist, and then report to me how much data (in GB) was ultimately ingested based on that whitelist? Or would I need to spin up another server that runs Splunk as a Heavy Forwarder, have the UFs point to that, and reconfigure the original Splunk instance to become an indexing / search head server? It's important for me to get accurate data on how much Splunk ingests so that I can work with their sales team to get the most accurate pricing for how big of a Splunk license my organization actually needs. I'm familiar with Splunk's workload licensing model, but the initial costs I've been tasked with obtaining are for the ingestion model. Please let me know if you need any additional information. Thanks in advance for any help you can provide! Jason
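On the measurement half of the question: whatever survives the input filters is what the license meter counts, and that figure can be read back out of the internal license usage log. A sketch (the 30-day window is an assumption):

index=_internal source=*license_usage.log type="Usage" earliest=-30d
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB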
Hello Splunkers, for a project I'm working on, I need to store different IDs in a variable after evaluating them with if or case. The idea is to check several conditions and, if one or more are met, update the value of the variable.

Example:
event1: A | B | C
event2: A | C | E
event3: B | F | G

Conditions:
if A is present -> ID01
if B is present -> ID02
if C is present -> ID03

Result:
event1: ID01,ID02,ID03
event2: ID01,ID03
event3: ID02

I tried to concatenate the results, but with no success:

| makeresults
| eval letter1="A", letter2="B", letter3="C"
| append [| makeresults | eval letter1="A", letter2="C", letter3="E"]
| append [| makeresults | eval letter1="B", letter2="F", letter3="G"]
| eval ID=""
| eval ID=ID.if(letter1="A" OR letter2="A" OR letter3="A","ID01",NULL)
| eval ID=ID.if(letter1="B" OR letter2="B" OR letter3="B",",ID02",NULL)
| eval ID=ID.if(letter1="C" OR letter2="C" OR letter3="C",",ID03",NULL)
| table letter1 letter2 letter3 ID
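A sketch of one way that sidesteps the concatenation-with-null problem: mvappend ignores null() arguments, so each condition contributes its ID (or nothing), and mvjoin produces the comma-separated string:

| makeresults
| eval letter1="A", letter2="B", letter3="C"
| append [| makeresults | eval letter1="A", letter2="C", letter3="E"]
| append [| makeresults | eval letter1="B", letter2="F", letter3="G"]
| eval ID=mvappend(
    if(letter1="A" OR letter2="A" OR letter3="A", "ID01", null()),
    if(letter1="B" OR letter2="B" OR letter3="B", "ID02", null()),
    if(letter1="C" OR letter2="C" OR letter3="C", "ID03", null()))
| eval ID=mvjoin(ID, ",")
| table letter1 letter2 letter3 ID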
Hi, I have different log types like:

<SQL > <TID: 0000000050> <RPC ID: 0002424958> <Queue: List > <Client-RPC: 390620 > <USER: *** > <Overlay-Group: 1 > /* Fri Feb 04 2022 17:47:10.0461 */SELECT * FROM ( SELECT T226.C1,C600000451 FROM T226 WHERE (('CC0000132482648' = T226.C600000451) AND ('7459898' = T226.C600000001)) ORDER BY 1 ASC ) WHERE ROWNUM <= 2

or:

<SQL > <TID: 0000000056> <RPC ID: 0002424078> <Queue: Fast > <Client-RPC: 390620 > <USER: *** > <Overlay-Group: 1 > /* Fri Feb 04 2022 17:46:53.9515 */SELECT C999003082 FROM T226 WHERE C1 = 'CC0000272965790'

I need to extract the CC* value, for example in this case CC0000132482648 (first log) and CC0000272965790 (second log). Thanks in advance!
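A sketch that captures the first CC-prefixed token in each event (assumes the value is always CC followed by digits and single-quoted, as in both samples):

| rex field=_raw "'(?<cc_value>CC\d+)'"

Dropping the surrounding quotes from the pattern, or adding max_match=0, would generalise it to unquoted or repeated occurrences.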
A Splunk forwarder is running on the host and sending the audit logs to Splunk instances through HEC. Now I want to send debug logs to another instance through another HEC endpoint. Is it possible to configure two HEC endpoints in a Splunk forwarder?
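For reference, the single-endpoint form of HEC output on a forwarder looks like the sketch below (outputs.conf, Splunk 8.1+; the token and URI are placeholders). Whether a second destination of this kind with per-input routing is supported is exactly the open question here:

[httpout]
httpEventCollectorToken = <your-token>
uri = https://receiver.example.com:8088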
Hello, I'm currently installing and configuring a new Monitoring Console. I have another instance which is the License Master, and I'm doing the configuration with .conf files. On my new MC, I've added in /opt/splunk/apps/my_app/local/server.conf:

[license]
master_uri = https://xx.XX.XX.XX:8089

The network flows are open and master_uri is up. On the new instance, `btool server list license --debug` shows only this file applied: /opt/splunk/etc/system/default/server.conf. I don't understand what I'm doing wrong. Any pointers? Thanks a lot! Ema
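For comparison, the conventional location for that stanza sits under etc/, e.g. (a sketch; the app name and address are placeholders from the post):

# $SPLUNK_HOME/etc/apps/my_app/local/server.conf
[license]
master_uri = https://xx.XX.XX.XX:8089

btool only scans configuration under $SPLUNK_HOME/etc, which would explain a file elsewhere not appearing in its output.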
Hello, I am looking for some guidance please with regards to a CSV input I have that is automatically updated daily as part of the TA. I want to be able to extract rows that have been updated within the last 24 hours. The CSV, for example, has the columns title, category, published_datetime. I want to see the other values in a row when published_datetime is less than 24 hours old, in UTC format. Thank you
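A sketch, assuming the CSV is available as a lookup and published_datetime is ISO-8601 UTC (both the lookup name and the strptime format are assumptions to adjust):

| inputlookup my_feed.csv
| eval published_epoch=strptime(published_datetime, "%Y-%m-%dT%H:%M:%S")
| where published_epoch >= relative_time(now(), "-24h")
| table title category published_datetime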
My events are in JSON format. The JSON path where my data is, is "alert.smtp-message.smtp-header". Within "smtp-header" I have content like the following, from which I could use help extracting some fields using rex:

"smtp-header": "Received: from mxdinx66.Gramyabnk.com (mxdinx66.Gramyabnk.com [159.45.78.215])\n\tby mn-svdc-epi-ran11.ist.Gramyabnk.net (Postfix) with ESMTP id 4JyJsN6m8kzVKnNg\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:28 +9999 (UTC)\nReceived: from pps.filterd (mxdinx66.Gramyabnk.com [127.9.9.1])\n\tby mxdinx66.Gramyabnk.com (8.16.9.42/8.16.9.42) with SMTP id 21EMIuas425197\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:28 GMT\nReceived: from mx9a-99994996.pphosted.com (mx9a-99994996.pphosted.com [295.229.165.191])\n\tby mxdinx66.Gramyabnk.com with ESMTP id 6e65wvawac-1\n\t(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA684 bits=256 verify=NOT)\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:27 +9999\nReceived: from pps.filterd (m9216616.ppops.net [127.9.9.1])\n\tby mx9b-99994996.pphosted.com (8.16.1.2/8.16.1.2) with ESMTP id 21EIDxq8928666\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:26 GMT\nAuthentication-Results: ppops.net;\n\tspf=pass smtp.mailfrom=info@efk.admin.ch;\n\tdkim=pass header.d=efk.admin.ch header.s=dkimkey1;\n\tdmarc=pass header.from=efk.admin.ch\nReceived: from mail11.admin.ch (mail11.admin.ch [162.26.62.11])\n\tby mx9b-99994996.pphosted.com (PPS) with ESMTPS id 6e625qnsf9-1\n\t(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA684 bits=256 verify=NOT)\n\tfor <tran.cu@Gramyabnk.com>; Mon, 14 Feb 2922 22:66:26 +9999\nDKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=efk.admin.ch; h=to\n\t:subject:date:to:from:reply-to:subject:message-id:mime-version\n\t:content-type:content-transfer-encoding; s=dkimkey1; bh=uoC6bt5q\n\thKVezRrk1ux9j7rGCMvkx/6cA9/rS1xbvwE=; b=V9mOEgc1tAyvbFpvkKFgHbnD\n\tHDh67iweoPEV7ZYCPpLW8KSBRU+uX+uL64xdJu9E1mp+BvITob98PRfIaCSIi6HC\n\tIf74+dtpxcVyfo9JXZmCj49tJdilXquYWoCu+OhLeONYd9/NMVs4S/IFHnYT/hmN\n\tNBzuP/5C6MKdlHavIwo=\nTo: \"Pretty Eloisa send you naughty videos https://vk.cc/cb5mIY\" <tran.cu@Gramyabnk.com>\nSubject: =?utf-8?Q?Pretty_Eloisa_send_you_naughty_videos_https://vk.cc/cb5mIY,_bitte?= =?utf-8?Q?_best=C6=A4tigen_Sie_ihre_EFK-Newsletter-Anmeldung?=\nDate: Mon, 14 Feb 2922 22:61:28 +9999\nTo: \"Pretty Eloisa send you naughty videos https://vk.cc/cb5mIY\" <tran.cu@Gramyabnk.com>\nFrom: \"Eidg. Finanzkontrolle\" <info@efk.admin.ch>\nReply-To: \"Eidg. Finanzkontrolle\" <info@efk.admin.ch>\nSubject: =?utf-8?Q?Pretty_Eloisa_send_you_naughty_videos_https://vk.cc/cb5mIY,_bitte?=\n =?utf-8?Q?_best=C6=A4tigen_Sie_ihre_EFK-Newsletter-Anmeldung?=\nMessage-ID: <MjQ1NzA5MwAC75229Y8BAMTY9NDg6Nzg4ODM6NzM@www.efk.admin.ch>\nContent-Type: multipart/alternative;\n\tboundary=\"b1_292f6ee91b9de8a92268de4c4ce5b57f\"\nX-TM-AS-GCONF: 99\nX-MSH-Id: E7195F2B6F624BA184EA6D9F12CD98AE\nContent-Transfer-Encoding: 7bit\nX-Proofpoint-GUID: 5sQWXU-CRjHoWtaxmd54Yn68A2IDf2Eu\nX-CLX-Shades: MLX\nX-Proofpoint-ORIG-GUID: 5sQWXU-CRjHoWtaxmd54Yn68A2IDf2Eu\nX-CLX-Response: 1TFkXGxgaEQpMehcaEQpZRBd6GF1SX9ZiBWNEcxEKWFgXbGdhYnBoGkBpaxo 7GxAHGRoRCnBsF6oeXwEBQkZDfXBTEAc ZGhEKcEwXZ1MfZ6t5RRkTE9AQGhEKbX4XGhEKWE9XSxEg\nMIME-Version: 1.9\nX-Brightmail-Tracker: True\nx-env-sender: info@efk.admin.ch\nX-Proofpoint-Virus-Version: vendor=nai engine=6699 definitions=19258 signatures=676461\nX-Proofpoint-Spam-Details: rule=inbound_aggressive_notspam policy=inbound_aggressive score=9\n clxscore=129 suspectscore=9 adultscore=9 bulkscore=9 mlxlogscore=472\n malwarescore=9 phishscore=9 spamscore=9 priorityscore=9 lowpriorityscore=9\n impostorscore=9 mlxscore=9 classifier=spam adjust=9 reason=mlx scancount=1\n engine=8.12.9-2291119999 definitions=main-2292149128",

I just need the extraction of the fields present in the last three lines (the X-Proofpoint-Spam-Details header): the values after the = sign, excluding the \n.

clxscore, suspectscore, adultscore, bulkscore, mlxlogscore, malwarescore, phishscore, spamscore, priorityscore, lowpriorityscore, impostorscore, mlxscore, classifier
Hi, I am trying to onboard streaming events from Salesforce into my Splunk instance using the 'Splunk Add-on for Salesforce Streaming API'. I have an HTTP proxy at instance level to allow connecting to the internet-facing Salesforce sandbox instance. After setting up the required connection and inputs, the data is not getting onboarded, and I am getting the following ERROR messages in ta_sfdc_streaming_api_sfdc_streaming_api_events.log. My Splunk version: 8.1.5. How do I solve this?

ERROR pid=434886 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/current/etc/apps/TA-sfdc-streaming-api/bin/sfdc_streaming_api_events.py", line 66, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/input_module_sfdc_streaming_api_events.py", line 26, in collect_events
    loop.run_until_complete(task)
  File "/opt/splunk/current/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/input_module_sfdc_streaming_api_events.py", line 61, in connect_sfdc
    async with sf_streaming_client as client:
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/exceptions.py", line 143, in async_wrapper
    return await func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/client.py", line 246, in __aenter__
    return cast("Client", await super().__aenter__())
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiocometd/client.py", line 432, in __aenter__
    await self.open()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/exceptions.py", line 143, in async_wrapper
    return await func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/client.py", line 143, in open
    await authenticator.authenticate()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/auth.py", line 100, in authenticate
    status_code, response_data = await self._authenticate()
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiosfstream/auth.py", line 187, in _authenticate
    response = await session.post(self._token_url, data=data)
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiohttp/client.py", line 619, in _request
    break
  File "/opt/splunk/etc/apps/TA-sfdc-streaming-api/bin/../lib/aiohttp/helpers.py", line 656, in __exit__
    raise asyncio.TimeoutError from None
concurrent.futures._base.TimeoutError