Dear community, it might be an odd question, but I need to forward splunkd.log to a foreign syslog server, so I was following the sample from here: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Forwarding/Forwarddatatothird-partysystemsd

So far I have configured the forwarder to forward testing.log (it should be splunkd.log later) to the foreign syslog target:

#inputs.conf
[monitor:///opt/splunk/var/log/splunk/testing.log]
disabled = false
sourcetype = testing

#outputs.conf
[tcpout]
defaultGroup = idx-cluster
indexAndForward = false

[tcpout:idx-cluster]
server = splunk-idx-cluster-indexer-service:9997

[syslog:my_syslog_group]
server = my-syslog-server.foo:514

#transforms.conf
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

So far so good: testing.log appears on the syslog server, but not just that — all other messages are forwarded too.

Question: How can I configure the (heavy) forwarder to send only testing.log to the foreign syslog server, and how can I make sure that testing.log does not get indexed? In other words, testing.log should only be sent to syslog. Many thanks in advance.
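A hedged sketch of the usual fix (stanza names taken from the post above): a transforms.conf stanza does nothing until a props.conf stanza references it, so scoping the routing transform to the testing sourcetype keeps all other data off the syslog output.

```conf
# props.conf on the heavy forwarder -- sketch, assuming sourcetype "testing"
# Only events of this sourcetype will run the send_to_syslog transform.
[testing]
TRANSFORMS-routing = send_to_syslog
```

Keeping those same events out of the indexer-bound group is a separate step; the forwarding docs describe overriding _TCP_ROUTING per event with a second transform in the same way, so that testing events are routed only to the syslog group rather than to idx-cluster.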
Splunk docs show all deployment components needing a minimum of x64, 12 cores, 12 GB RAM, 2 GHz. My question is about a dedicated license server for a VERY small distributed system for training and development. I want a search head, an indexer, and then a separate LM and DS. The data volume is small, less than 2 GB/day. Do I really need the full-blown minimums for an LM that will hold a single Dev license? I wanted to put this onto an RPi, but ...... yeah ..... that doesn't look like an option. I have a couple of low-end NUCs that will be x64 but won't meet the minimums for cores or RAM. I would welcome any assistance, or even mentoring, on this project.
Hi, how can I combine a field value when the other three field values are the same? Example: if field1, field2, and field3 are the same but field4 is different, it creates a new row in my Splunk table. I want to merge or combine the field4 values into one field value, separated by commas, whenever field1, field2, and field3 are the same.
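A sketch in SPL, assuming the hypothetical field names field1..field4 from the question: stats values() (or list(), which keeps duplicates and order) collapses the rows into one per field1/field2/field3 combination, and mvjoin turns the resulting multivalue field into a single comma-separated string.

```spl
... your base search ...
| stats values(field4) as field4 by field1 field2 field3
| eval field4=mvjoin(field4, ", ")
```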
Hi Team, I am facing the below error while testing in my local Splunk Web v9 while connecting with a Chronicle instance:

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)

I have created a Python app to upload into Splunk. I created a request_fn where the below line of code is executed:

requests.get(host + url, verify=False, **kwargs)

I made sure that SSL verification is disabled in the Python code (verify=False above), and I have also disabled it in Splunk settings: Server Settings > General > "Enable SSL (HTTPS) in Splunk Web?" set to NO. I have also checked web.conf, where SSL is set to 0 (no):

[settings]
enableSplunkWebSSL = 0

But still, when my local Splunk Web tries to make the HTTP request, it gives the SSL error:

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)

Does anybody have any clue, or has anyone faced the same issue?
Hi All, hope you are all doing well. I am very new to Splunk Enterprise Security, and I need your help to understand how I can create a reverse integration with ServiceNow. We are using the ServiceNow Security Operations integration to manually create incidents in ServiceNow for notables. We have a new ask from the SOC to update the notables when the incidents are created and closed in ServiceNow. We are using Splunk Enterprise and want to know which endpoints we need to provide so that we can achieve the reverse communication. I have created a user in Splunk who has access to edit notables, but I am not sure what endpoint I need to provide: is it just the URL of my instance, or do I need to add any services as well? Please let me know if you have any other questions. Thanks in advance.
Hello Splunk Community, I'm encountering a problem with the component from '@splunk/visualizations/Line' in my Splunk dashboard framework. I am trying to set up an event to be triggered when a user clicks on a point in the line chart. Despite using the 'point.click' event, it doesn't seem to work as expected. Has anyone faced a similar issue, or can anyone suggest what might be going wrong here? Any guidance or examples would be greatly appreciated. Thanks in advance for your help!

Here is the relevant part of my code:

import React, { useEffect, useState } from 'react';
import Line from '@splunk/visualizations/Line';

const MemoryUtilizationLine = () => {
    const handleEvent = (e) => {
        console.log(e);
    };
    return (
        <div className='m-2 pie-border-style'>
            <Line
                pointClick={handleEvent}
                options={{}}
                dataSources={{
                    primary: {
                        requestParams: { offset: 0, count: 20 },
                        data: {
                            fields: [
                                { name: '_time' },
                                { name: 'count', type_special: 'count' },
                                { name: 'percent', type_special: 'percent' },
                            ],
                            columns: [
                                [
                                    '2018-05-02T18:10:46.000-07:00',
                                    '2018-05-02T18:11:47.000-07:00',
                                    '2018-05-02T18:12:48.000-07:00',
                                    '2018-05-02T18:13:49.000-07:00',
                                    '2018-05-02T18:15:50.000-07:00',
                                ],
                                ['600', '525', '295', '213', '122', '19'],
                                ['87.966380', '50.381304', '60.023780', '121.183272', '70.250513', '90.194752'],
                            ],
                        },
                        meta: { totalCount: 20 },
                    },
                }}
            />
        </div>
    );
};

export default MemoryUtilizationLine;
Hi, my team (Team 1) has a cluster of indexers and a search head cluster. We want to add a dedicated search head for Team 2, where they can be admins. A few conditions and restrictions:

- Team 1 should remain admins of the cluster but not of the dedicated search head.
- Team 2 should not be able to search certain indexes, nor change that setting by any means.

In short, there are a few indexes that we do not want Team 2 to see, or to tamper with the settings to gain access to, but we would like them to be admins of their own search head. Any suggestions?
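One way the index restriction is commonly expressed (a sketch, with hypothetical role and index names): define Team 2's role in authorize.conf so that srchIndexesAllowed simply omits the sensitive indexes. One caveat worth stating plainly: this lives on the search head, so if Team 2 holds full admin there they could edit it; the restriction is only robust where it is enforced by a layer they do not administer.

```conf
# authorize.conf -- sketch; role and index names are hypothetical
[role_team2]
importRoles = user
srchIndexesAllowed = team2_*;main
srchIndexesDefault = main
```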
If anyone knows about the following, I would appreciate your guidance.

What I want to do:
After selecting a specific date, I want the charts in the multiple reports (cards) shown on the Splunk screen to be filtered so they display only data for the selected date.

Where I am stuck / what I want to know:
I don't know how to let the user select a specific date on the Splunk screen. I found that there is a feature to place an input/selection box for choosing a date or date-time range, but I would like to know how to implement something simpler: picking a single date from a calendar and having the charts filtered by it. Does this require coding such as jQuery? Thank you in advance for your help.
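In Simple XML dashboards this does not require jQuery: a time input renders the built-in date picker, and its token can drive every search on the page. A minimal sketch (token name and search are hypothetical) — choosing "Date Range" or a single day in the picker filters all panels that reference the token:

```xml
<form>
  <fieldset>
    <input type="time" token="day">
      <label>Select a date</label>
      <default>
        <earliest>-1d@d</earliest>
        <latest>@d</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=main | timechart count</query>
          <earliest>$day.earliest$</earliest>
          <latest>$day.latest$</latest>
        </search>
      </chart>
    </panel>
  </row>
</form>
```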
As the title suggests, I want to change the CSS style of a table within a Splunk dashboard using classes instead of ids. The reason is that I have multiple tables with different values but the same style applied. If I want to make changes or create a new table with a similar style, I have to keep iterating the id (e.g. tableid_10), which is impractical. I have inspected the element and cannot change the Splunk default class "panel-element-row", as this would affect other tables on my dashboard.

For example, for the panel below, the CSS works fine if I use the id as a selector:

<panel>
  <table id="test">
    <search>
      <query>index="test" | eval hide="Hide" | rename hide as " "</query>
      <earliest>0</earliest>
      <latest></latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>

with the following CSS:

#test th {
  color: #808080 !important;
  border: 1px solid white !important;
}

However, if I switch to using a class selector,

<panel>
  <table class="test">
    <search>
      <query>index="test" | eval hide="Hide" | rename hide as " "</query>
      <earliest>0</earliest>
      <latest></latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>

with the following CSS:

.test th {
  color: #808080 !important;
  border: 1px solid white !important;
}

it no longer works.
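One workaround sometimes used when only ids reach the rendered page (a sketch, assuming the ids from the post): keep per-table ids but give them a shared prefix, then match the prefix with a CSS attribute selector so a single rule styles all of them without iterating selectors.

```css
/* Styles every table panel whose id starts with "test"
   (e.g. id="test_sales", id="test_ops") -- prefix is hypothetical. */
[id^="test"] th {
  color: #808080 !important;
  border: 1px solid white !important;
}
```

Since the poster's `#test th` rule already worked, any selector that matches the same element by its id attribute should behave identically.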
Hello members, I'm facing an issue with index clustering and indexer peers. One of the peers has BatchAdding status; after a while it goes Up, then returns to BatchAdding. Another peer goes Up, sits in Pending for a while, then goes Up again. I can't figure out why this occurs. Can anyone help? This picture shows the problem:
Hello, I was wondering whether it is possible to set up a multi-factor authentication scenario for browser tests on the Splunk Observability platform. For example, when using time-based one-time passwords (TOTP), you generate a secret key or QR code and configure it in the test environment, setting the secret key or QR code as a global variable. This allows the authentication code to be generated automatically during the test. With the Datadog product, you can create a global variable and enter the secret key, or upload a QR code from the authentication provider. With the Splunk product, can I create a global variable for entering a secret key or uploading a QR code from an authentication provider?
Hi Community, How can I access a TI provider's API from Splunk Cloud if the provider has whitelisted IPs but Splunk Cloud's IP is not static?  
Hello guys, I am quite new to this topic, so I really need your help ^_^. I am ingesting Zscaler logs into a Splunk Cloud instance using a heavy forwarder and TCP inputs. As the volume of AUTH logs is huge, we want to filter them with the following condition: if a user logs in to an application today, all subsequent logs for that user logging in to that application on that specific day (month/date/year) would be discarded, and we would resume ingesting the next day under the same condition. I hope this is clear. I know that this can be done in props.conf and transforms.conf, but I am not sure how I should build the configuration. Thank you in advance.
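A hedged sketch of the props/transforms mechanics (stanza name and regex are hypothetical): events matching the regex are routed to the null queue and discarded before indexing. One caveat worth stating plainly: transforms are stateless and evaluate each event in isolation, so "keep only the first login per user, per application, per day" cannot be expressed this way; regex filtering can only drop whole classes of events.

```conf
# props.conf -- hypothetical sourcetype for the Zscaler AUTH feed
[zscaler:auth]
TRANSFORMS-drop = drop_auth_noise

# transforms.conf -- discard events matching the pattern
[drop_auth_noise]
REGEX = some_pattern_to_discard
DEST_KEY = queue
FORMAT = nullQueue
```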
Trying to use Splunk Cloud, I get:

The connection has timed out
An error occurred during a connection to prd-p-xauy6.splunkcloud.com.

It seems to be an SSL certificate error caused by strict checking. Is there a solution?
AppDynamics Cluster Agent allows you to auto-instrument your applications running on Kubernetes. The auto-instrumentation injects APM agents at runtime, modifying your deployment spec with an init container for the AppDynamics APM agent. You can use different strategies to target Kubernetes Deployments, StatefulSets, or DeploymentConfigs. In this article, we will cover instrumenting one deployment by using a label defined at the Deployment level (it works the same for DeploymentConfig and StatefulSet).

Let's take this forward with two sample deployments running in the namespace abhi-java-apps-second.

My first deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-app-abhi
  labels:
    app: tomcat-app-abhi-second
  namespace: abhi-java-apps-second
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-app-abhi
  template:
    metadata:
      labels:
        app: tomcat-app-abhi
    spec:
      containers:
        - name: tomcat-app-abhi
          #image: docker.io/abhimanyubajaj98/java-tomcat-sample-app-buildx:latest
          image: docker.io/abhimanyubajaj98/java-application:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: JAVA_TOOL_OPTIONS
              value: -Xmx512m
            #- name: APPDYNAMICS_AGENT_UNIQUE_HOST_ID
            #  value: $(cat /proc/self/cgroup | head -1 | awk -F '/' '{print $NF}' | cut -c 16-27)
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-app-service
  labels:
    app: tomcat-app-abhi
  namespace: abhi-java-apps-second
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: tomcat-app-abhi

My second deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-app-abhi-labelmatch
  labels:
    app: tomcat-app-abhi-second-labelmatch
  namespace: abhi-java-apps-second
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-app-abhi-labelmatch
  template:
    metadata:
      labels:
        app: tomcat-app-abhi-labelmatch
    spec:
      containers:
        - name: tomcat-app-abhi
          #image: docker.io/abhimanyubajaj98/java-tomcat-sample-app-buildx:latest
          image: docker.io/abhimanyubajaj98/java-application:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: JAVA_TOOL_OPTIONS
              value: -Xmx512m
            #- name: APPDYNAMICS_AGENT_UNIQUE_HOST_ID
            #  value: $(cat /proc/self/cgroup | head -1 | awk -F '/' '{print $NF}' | cut -c 16-27)
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-app-service-labelmatch
  labels:
    app: tomcat-app-abhi-labelmatch
  namespace: abhi-java-apps-second
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: tomcat-app-abhi-labelmatch

Now, my use case is to instrument only the deployment tomcat-app-abhi-labelmatch. To do this, I need to edit my cluster-agent.yaml and add the specs below:

instrumentationRules:
  - namespaceRegex: abhi-java-apps-second
    labelMatch:
      - app: tomcat-app-abhi-second-labelmatch
    tierName: abhiapps
    language: java
    imageInfo:
      image: "docker.io/appdynamics/java-agent:latest"
      agentMountPath: /opt/appdynamics
      imagePullPolicy: Always

Now, after the deployment is done, only the deployment tomcat-app-abhi-labelmatch will have the AppDynamics Java agent.
Splunk Add-on for Google Cloud Platform: how can I add logs / a new input to get Kubernetes pod status? What are the steps? How do I add a new input so that Kubernetes pod status (highlighted in the GCP picture of pods below) comes into Splunk?
Hello Community!  We are excited to announce the availability of Log Observer Connect for AppDynamics, a new integration that combines the power of Cisco AppDynamics and Splunk to make troubleshooting application performance issues faster and more efficient.  With this integration, you can quickly find the right logs in Splunk while maintaining troubleshooting context from AppDynamics, centralize logs across teams, and reduce storage costs.   Key capabilities include single sign-on for seamless integration, deep linking for contextual log analysis, and log enrichment for faster searches. Best of all, this enhancement is available at no additional cost for customers with licenses for both AppDynamics (SaaS and self-managed) and Splunk (Cloud or Enterprise). Additional log ingestion volumes may incur extra costs.  To learn more, check out our community Knowledge Base Article: How to Deploy Log Observer Connect for AppDynamics  Jump into more of the details in the Log Observer Connect data sheet. 
Hello, I have a table with several fields that I display in a dashboard. One column comes from the violation_details field, which contains XML data. Note that I don't want to parse anything from this field, because depending on the violation the tags won't be the same. Here is an example value for this field:

<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>58f7c3e96a0c279b-7e3f5f28b0000040</block><alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm><learn>5cf2c1e9730c2f5b-3d3c000830000000</learn><staging>0-0</staging></violation_masks><response_violations><violation><viol_index>56</viol_index><viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name><response_code>500</response_code></violation></response_violations></BAD_MSG>

How could I make this more readable, like this:

<?xml version='1.0' encoding='UTF-8'?>
<BAD_MSG>
  <violation_masks>
    <block>58f7c3e96a0c279b-7e3f5f28b0000040</block>
    <alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm>
    <learn>5cf2c1e9730c2f5b-3d3c000830000000</learn>
    <staging>0-0</staging>
  </violation_masks>
  <response_violations>
    <violation>
      <viol_index>56</viol_index>
      <viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name>
      <response_code>500</response_code>
    </violation>
  </response_violations>
</BAD_MSG>

I've seen this post, XML-to-display-in-a-proper-format-with-tag, but it seems to use a deprecated method. Is there a better way?
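A lightweight sketch that avoids parsing the XML at all: insert a newline between adjacent tags with eval's replace(). Full indentation is not reconstructed, but each tag lands on its own line, which is often readable enough in a table cell (the table cell must also render newlines for this to show).

```spl
... your base search ...
| eval nl=urldecode("%0A")
| eval violation_details=replace(violation_details, ">\s*<", ">" . nl . "<")
| fields - nl
```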
How can I check whether any apps/add-ons running in Splunk Cloud depend on Python versions < 3.9?