All Topics


I was just going through the ‘Masa diagrams’ link: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774. If you look at the "Detail Diagram - Standalone Splunk", the queues are laid out like this (one example):

(persistentQueue) + udp_queue --> parsingQueue --> aggQueue --> typingQueue --> indexQueue

So let's say we have a UDP input configured and some congestion occurs in the typingQueue; the persistentQueue should still be able to hold the data until the congestion clears up. That should prevent data loss, right? Sorry for the assumption-loaded question. I am trying to figure out what we can do to stop UDP input data from being dropped because the typingQueue is full. (P.S. Adding an extra HF is not an option right now.) Thanks in advance!
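A minimal inputs.conf sketch of what enabling a persistent queue on the UDP input could look like (the port, sourcetype, and sizes below are placeholder assumptions; queueSize and persistentQueueSize are the standard network-input settings):

[udp://514]
# placeholder sourcetype for the UDP feed
sourcetype = my_udp_sourcetype
# in-memory buffer for this input
queueSize = 10MB
# on-disk buffer used once the in-memory queue fills; size here is only an example
persistentQueueSize = 500MB

This only buffers at the input stage: if the downstream blockage lasts longer than the queue can absorb, data can still be dropped, so it complements rather than replaces fixing the typingQueue bottleneck.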
I recently downloaded VT4Splunk and everything was working fine with our API key. Then, a few days later, I received a warning to enter the API key. However, when I entered the key again I received the following error message: “Unexpected error when Validating VirusTotal API Key: 'ta_virustotal_app_settings'”. We are currently on Splunk Cloud 9.0.2303.201 and VT4Splunk 1.6.2. Any assistance you can provide will be greatly appreciated!
I have multiple strings like the ones below in various log files. The intention is to retrieve them in a table and apply a group-by.

Satisfied Conditions: XYZ, ABC, 123, abc
Satisfied Conditions: XYZ, bcd, 123, abc
Satisfied Conditions: bcd, ABC, 123, abc
Satisfied Conditions: XYZ, ABC, 456, abc

The output should then be:

Condition  Count
XYZ        3
ABC        3
abc        4
bcd        2
123        3
456        1

I get as far as retrieving the data column-wise but cannot get the counts. Any inputs here would be helpful.
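A minimal SPL sketch for this (the index filter is a placeholder, and it assumes each event contains one "Satisfied Conditions:" line): extract the comma-separated list, split it into a multivalue field, expand it, and count.

index=my_index "Satisfied Conditions:"
| rex "Satisfied Conditions:\s*(?<conditions>.+)"
| eval conditions=split(conditions, ", ")
| mvexpand conditions
| stats count as Count by conditions
| rename conditions as Condition
| sort - Count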
The ODBC driver on Splunkbase that enables Power BI to connect with Splunk is available only for macOS. Can a Windows version be made available?
I upgraded my SE from 7.2.4 to 8.2.8, and afterwards I upgraded my apps and add-ons as per compatibility. However, some add-ons stopped working, and the SolarWinds add-on is one of them. I am getting the errors below:

10-26-2023 18:10:04.720 +0000 ERROR AdminManagerExternal [20948 TcpChannelThread] - Stack trace from python handler:
Traceback (most recent call last):
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper
 for name, data, acl in meth(self, *args, **kwargs):
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 179, in all
 **query
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper
 return request_fun(self, *args, **kwargs)
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f
 val = f(*args, **kwargs)
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get
 response = self.http.get(path, all_headers, **query)
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get
 return self.request(url, { 'method': "GET", 'headers': headers })
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 1244, in request
 raise HTTPError(response)
solnlib.packages.splunklib.binding.HTTPError: HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 150, in init
 hand.execute(info)
 File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute
 if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunk_aoblib/rest_migration.py", line 39, in handleList
 AdminExternalHandler.handleList(self, confInfo)
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 40, in wrapper
 for entity in result:
 File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 122, in wrapper
 raise RestError(exc.status, str(exc))
splunktaucclib.rest_handler.error.RestError: REST Error [404]: Not Found -- HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}

10-20-2023 18:45:23.444 +0000 ERROR ModularInputs [15755 MainThread] - Unable to initialize modular input "solwarwinds_query" defined in the app "Splunk_TA_SolarWinds": Introspecting scheme=solwarwinds_query: script running failed (PID 15889 exited with code 1).

10-20-2023 18:45:23.443 +0000 ERROR ModularInputs [15755 MainThread] - <stderr> Introspecting scheme=solwarwinds_query: File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/cacerts/ca_certs_locater.py", line 59, in _fallback
While checking which Windows OS editions the universal forwarder supports, I could not find any documentation on this.
Hello, please, I want to know if there is a way to display legends in the Calendar Heatmap app directly, without requiring a mouseover on the rectangles (circles).
Hi, I am a Windows user trying to install the universal forwarder on Ubuntu. Can anyone share the download link and the installation steps, please?
With nearly 12,000 students — including 60% from abroad — the London School of Economics and Political Science (LSE) has to ensure seamless online enrollment services across time zones with 24/7 availability. Join our webinar for insights into how LSE leverages full-stack observability to deliver the digital experience its global student body expects.

London School of Economics & Cisco AppDynamics: Enhancing user experience in complex environments
AMER: November 8 at 11 a.m. PST / 2 p.m. EST
APAC: November 8 at 8:30 a.m. IST / 11 a.m. SGT / 1 p.m. AEST
EMEA: November 8 at 10 a.m. BST / 11 a.m. CEST

You’ll learn:
- The challenges LSE faced when dealing with open-source software.
- Why (and how) LSE began down the road toward full-stack observability.
- How Cisco AppDynamics provides deep tech stack insights to improve app performance.

Register now! Get real-world insights from the team at LSE to shape your journey to full-stack observability.

Speakers:
Samuel Dovey, Regional Sales Manager, Cisco AppDynamics
Derek Alexander, Senior Software Developer, London School of Economics and Political Science
Metric streaming, a method that employs Kinesis Data Firehose Stream for the delivery of metrics, is an advanced alternative to traditional metric polling, which may exhibit a latency of 5-10 minutes. This highly scalable and efficient approach ensures that, once set up, near real-time metrics start flowing in just 1-2 minutes.

We've enhanced the ways in which you can set up metric streams directed to Splunk Observability Cloud. Previously, enabling metric streams was limited to our API, and the setup required invoking CloudFormation templates that create the required infrastructure on AWS, as well as granting CloudWatch stream permissions that let us create and manage metric streams for you. Because those streams' lifecycles are bound to Splunk integrations, they are now called "Splunk-managed streaming". We've now incorporated the Splunk-managed streaming option within the UI guided setup alongside the polling method. Concurrently, with the introduction of Quick AWS Partner Setup in the CloudWatch console, we've integrated an AWS-managed streaming capability, enabling users to efficiently manage metric streams via Amazon CloudWatch. You can find how to set up the AWS-managed metric streams here. A detailed comparison of these options is also available in our documentation.

[Screenshot: Quick AWS Partner setup in Amazon CloudWatch]

If you have previously enabled the metric streaming option via API (which will now be called "Splunk-managed streaming"), rest assured that your integrations will continue to function as usual. One change you may observe concerns integrations where an access token was assigned to Kinesis Firehose via the AWS CloudFormation templates while no token, or a different one, was assigned to the integration in the Splunk console: there you might notice a shift in the token usage metrics. This is due to our transition towards exclusively utilizing the token configured on the AWS side. Consequently, this also streamlines the guided setup process for AWS integrations with streaming options, eliminating the need to select an access token. For current insights into which metric streams on AWS and integrations on Splunk exhibit token discrepancies, you can create a new chart for the metric "sf.org.awsMetricStreamsTokenDifference" and get all the details in the "Data table" view.

Depending on your chosen streaming method, filtering metric streams can be executed using namespaces and metric names, either within Splunk Observability Cloud or the AWS CloudWatch console. It's important to note that, when transitioning from a polling setup with existing filters, some filters may not be compatible with streaming configurations. Specifically, functionalities such as resource tag filtering and advanced filtering mechanisms are not supported in metric streaming integrations.

We have also expanded coverage for the Cloud Metric Metadata Sync service, which collects resource group tags and entity properties from AWS APIs. This expansion mainly targets metrics from Amazon Keyspaces, with the namespace "AWS/Cassandra", which offers a convenient and scalable way to run Cassandra databases. The corresponding metadata items are automatically collected as property metadata for all new and existing metrics related to this namespace. To ensure all of your metadata is collected, please verify that your AWS IAM permissions for the integration include all prescribed permissions from here.
Additionally, our dedication to user-centric improvements continues with detailed status visibility options for AWS integrations. You can now view specific states for Optimizer, Metric Polling, Metric Streaming, and Log Streaming (tailored for Log Observer customers). The UI makes it straightforward to pinpoint the exact state of an integration, which makes it easier to identify and address issues such as stopping and cleaning up streaming. We are thrilled to announce that this release will first be available in selected realms, ensuring a smooth rollout. Following this initial phase, we will offer general availability to all our users on November 6, 2023.
I am trying to configure Splunk to read the aide.log file. Which file(s) do I need to modify on the Splunk universal forwarder to get it to read aide.log?
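A minimal sketch of a monitor stanza you could add to an inputs.conf on the forwarder (for example $SPLUNK_HOME/etc/system/local/inputs.conf or an app's local/inputs.conf); the log path, index, and sourcetype below are placeholder assumptions for a typical AIDE setup:

[monitor:///var/log/aide/aide.log]
# index and sourcetype are assumptions; adjust to your environment
index = main
sourcetype = aide
disabled = false

After editing, restart the forwarder so the new input takes effect.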
Hi Team, I am using the below query:

[search index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True="✔"
| bin _time span=1d
| dedup _time
| eval EBNCStatus="ebnc event balanced successfully"
| table EBNCStatus True

If there are no events, the message "ebnc event balanced successfully" should not be displayed. Can someone guide me on that?
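A minimal SPL sketch of one way to show the row only when matching events exist (same base search, counting per day); this is an assumption about the intended behavior, so adjust as needed:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| bin _time span=1d
| stats count by _time
| eval EBNCStatus="ebnc event balanced successfully", True="✔"
| table _time EBNCStatus True

Because stats only produces rows for days that actually contain matching events, the table stays empty when there are none.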
Can someone suggest which type of storage is best for a Splunk cluster? Is it block storage or object storage?
I want to extract contractWithCustomers and contracts from the events below, using rex, into a field named entity.

For ID 1349c1f4-989c-4ea5-94ca-25fc40f6aab8 -flow started put:\contractWithCustomers:application\json:bmw-crm-wh-xl-cms-api-config
For ID 1697108895 -flow started put:\contracts:application\json:bmw-crm-wh-xl-cms-api-config
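A minimal rex sketch (it assumes the entity name always follows "put:" plus a single non-word character, as in the two sample events):

| rex "flow started put:\W(?<entity>\w+):"
| stats count by entity

If the character after put: can vary or be absent, \W? would make it optional.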
Hello, I am trying to make a report that displays which notables were closed with which disposition. Unfortunately, when I build the report it shows values such as "disposition:1", "disposition:2" and so on, and I can't figure out how to change these values so that the chart/graph shows "false positive" or "true positive" instead. I found a way to change the name of a column (rename ... as), but I can't find a way to change the values themselves, and using the same logic (rename disposition:1 as false positive) doesn't do the trick. Could you point me in the right direction, please? Thanks in advance
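A minimal sketch of one way to map the codes to labels with eval (the code-to-label pairs below are assumptions; check the disposition lookup shipped with Enterprise Security for the actual mapping in your environment):

| eval disposition_label=case(disposition=="disposition:1", "False Positive", disposition=="disposition:2", "True Positive", true(), disposition)
| stats count by disposition_label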
Hi All, I have created the below query:

search index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
| rex "TRIM\.CNX(CTR)?\.(?<TRIM_ID>\w+)"
| transaction TRIM_ID startswith="Reading Control-File /absin/TRIM.CNXCTR." endswith="Completed Settlement file processing, TRIM.CNX."
| eval StartTime=min(_time)
| eval EndTime=StartTime+duration
| eval duration_min=floor(duration/60)
| rename duration_min as TRIM.CNX_Duration
| table StartTime EndTime TRIM.CNX_Duration
| sort +StartTime +EndTime]
| fieldformat ProcessingStartTime = strftime(ProcessingStartTime, "%F %T.%3N")
| fieldformat ProcessingEndTime = strftime(ProcessingEndTime, "%F %T.%3N")
| table starttime EndTime

I am not getting the correct time; I am getting it in the format below:
start time - 1697809010.604
EndTime - 1697809075.170

I want it in this format:
StartTime - 2023-10-20 02:16:56.629
EndTime - 2023-10-20 02:19:57.554

Can someone help me here?
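A minimal sketch of a possible fix, assuming the issue is simply that fieldformat is applied to ProcessingStartTime/ProcessingEndTime while the table actually outputs StartTime and EndTime; formatting the fields the table shows should give the desired display:

| fieldformat StartTime=strftime(StartTime, "%F %T.%3N")
| fieldformat EndTime=strftime(EndTime, "%F %T.%3N")
| table StartTime EndTime TRIM.CNX_Duration

Note that fieldformat only changes how the value is rendered; use eval with strftime instead if you need the formatted string stored in the field.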
Splunk Enterprise 9.0.5.1

Hello! I have to calculate the delta between two timestamps that have nanosecond granularity. According to Splunk documentation, nanoseconds are supported with either %9N or %9Q: https://docs.splunk.com/Documentation/Splunk/9.0.5/SearchReference/Commontimeformatvariables

When I try to parse a timestamp with nanosecond granularity, however, it stops at microseconds and calculates the delta in microseconds as well. My expectation is that Splunk should maintain and manage nanoseconds. Here is a run-anywhere example:

| makeresults
| eval start = "2023-10-24T18:09:24.900883123"
| eval end = "2023-10-24T18:09:24.902185512"
| eval start_epoch = strptime(start,"%Y-%m-%dT%H:%M:%S.%9N")
| eval end_epoch = strptime(end,"%Y-%m-%dT%H:%M:%S.%9N")
| table start end start* end*
| eval delta = end_epoch - start_epoch
| eval delta_round = round(end_epoch - start_epoch,9)

Is this a defect or am I doing something wrong? Thank you! Andrew
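A minimal workaround sketch, under the assumption that eval/strptime only keeps microsecond precision internally: split the fractional part off and do the nanosecond arithmetic as plain integers (this assumes both timestamps always carry exactly nine fractional digits):

| makeresults
| eval start = "2023-10-24T18:09:24.900883123", end = "2023-10-24T18:09:24.902185512"
| eval start_sec = strptime(substr(start,1,19), "%Y-%m-%dT%H:%M:%S"), end_sec = strptime(substr(end,1,19), "%Y-%m-%dT%H:%M:%S")
| eval start_ns = tonumber(substr(start,21)), end_ns = tonumber(substr(end,21))
| eval delta_ns = (end_sec - start_sec)*1000000000 + (end_ns - start_ns)
| table start end delta_ns

For the sample values this yields delta_ns = 1302389 nanoseconds.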
I am trying to set up a dashboard that gives me details like users' current concurrency settings and role utilization. If someone has implemented this kind of dashboard, please help.
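A minimal sketch of a panel search for the per-role concurrency settings side, using the REST endpoint for roles (the column list is an assumption about which quotas matter to you):

| rest /services/authorization/roles splunk_server=local
| table title srchJobsQuota rtSrchJobsQuota srchDiskQuota
| rename title as role

For current usage you could pair this with a search over index=_audit or the search-jobs REST endpoint, depending on what "utilization" should mean here.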
When I call: https://api.{REALM}.signalfx.com/v1/timeserieswindow with my access token as header: X-SF-TOKEN I receive: { "message": "API Error: 400", "status": 400, "type": "error" }   The same happens when I add parameters to request: https://api.{REALM}.signalfx.com/v1/timeserieswindow?query=sf_metric:"jvm.cpu.load"&startMs=1489410900000&endMs=1489411205000   Am I missing something?
Absolute imports: from utils import get_log
Relative imports: from .utils import get_log

This import line is in splunk/etc/apps/my_app/bin/myapp.py
Path of utils: splunk/etc/apps/my_app/bin/utils.py