All Topics


If I have a transforms.conf like the below:

[ORIGIN2]
REGEX = (?:"id":"32605")
FORMAT = sourcetype::test-2
DEST_KEY = MetaData:Sourcetype

[aa]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[bb]
REGEX = (?=.*successfully)
DEST_KEY = queue
FORMAT = indexQueue

and my props.conf calls it like the following:

[test]
TRANSFORMS-rename_sourcetype = ORIGIN2
SHOULD_LINEMERGE = false
EVAL-ok = "ok"

[aslaof:test-2]
EVAL-action2 = "whatt"
TRANSFORMS-eliminate_unwanted_data = aa,bb
EVAL-action = "nooooo"

I can't seem to figure out why I'm not allowed to perform a transform on my newly created sourcetype. Oddly, Splunk applies my two EVAL settings, but my transforms are not performed. Am I not allowed to perform transforms on a sourcetype I just created? I also tried combining the initial transform that creates the sourcetype into one piece, REGEX = (?=.*"id":"32605")(?=.*successfully), but this does not seem to work either.
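One likely explanation, based on how index-time processing generally behaves: the TRANSFORMS under [aslaof:test-2] never fire because index-time settings are selected from the sourcetype the event arrived with ([test]); a sourcetype rewritten by a transform does not get a second pass through props, while search-time settings such as EVAL- do follow the new sourcetype, which matches what you are seeing. A minimal sketch, assuming the filtering can instead be chained under the original sourcetype (transforms listed in one class run left to right):

[test]
TRANSFORMS-rename_sourcetype = ORIGIN2, aa, bb
SHOULD_LINEMERGE = false

Note this applies aa/bb to every [test] event, not only the rewritten ones, so you may want to tighten the [aa] REGEX (for example to the "id":"32605" pattern) rather than matching everything.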
I deployed Splunk AppDynamics on-premises and Splunk Enterprise on-premises for a demo, and I followed this tutorial to create the integration user: How To - Enable Log Observer Connect For Splunk AppDynamics and Splunk Core Integration - YouTube. I also tried following this one: How to Deploy Log Observer Connect for AppDynamics - Splunk Community. But I still have no clue how to achieve the integration. After adding the username, password, and required parameters in Splunk AppDynamics, I get this error: "The Splunk integration service is not available." Any ideas on how to solve this issue?
I am deploying an on-premises AppDynamics demo for a customer (version 25.1.1.10031) and it is running on HTTP (8090). However, when I try to open it at https://<ip_addr>:8181, I get the attached SNI error message. (The screenshot shows "appd" rather than the IP address, just to hide it.) How do I bypass this error?
Dear Splunk community, I have a search in Splunk that generates results:

index="myindex" message_id="AU2" | stats count by src | search count > 2

It basically searches the index for "AU2" events and should raise an alert when there are more than 2 per src. I have created several alerts with different modes:

Real-time Mode
Scheduled Mode

When I run:

index=_internal sourcetype=scheduler savedsearch_name="PRUEBA Scheduled"

it shows the following:

Could you tell me what I might be doing wrong or what I might be missing? Regards.
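A possible place to start (a sketch, not a definitive diagnosis; the field names below are the ones the scheduler log usually carries, so verify them against your own _internal events):

index=_internal sourcetype=scheduler savedsearch_name="PRUEBA Scheduled"
| table _time, status, result_count, alert_actions, run_time

If status is success and result_count is greater than 0 but nothing fires, check the alert's trigger condition: since the SPL already filters with search count > 2, the trigger usually only needs to be "Number of results > 0"; requiring "> 2" there as well means more than two src values must each exceed the threshold before anything triggers.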
Hello! I am using Dashboard Studio. I created an Events visualization that is currently in the List view. I want to make it into the Table view. This is the source code for the Dashboard Studio dashboard:

{
  "containerOptions": {},
  "context": {},
  "dataSources": {
    "primary": ds # the query is index=* | table field1, field2, field3
  },
  "options": {},
  "showLastUpdated": false,
  "showProgressBar": true,
  "type": "splunk.events"
}

I created something similar to what I'm looking for in Dashboard Classic, and this is the code:

<panel>
  <event>
    <search>
      <query>index=*</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>
    <fields>field, field2, field3</fields>
    <option name="type">table</option>
  </event>
</panel>
I am creating a Classic Dashboard. I have an Events panel that is in the Table format. The headers for the table are the following fields: _time, host, source, and sourcetype. These are the "Selected Fields". However, there are other fields that I would like to include as "Selected" so that they show up in the header. Is there any way to do that?

<panel>
  <event>
    <search>
      <query>index=*</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>
    <option name="type">table</option>
  </event>
</panel>
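One way that may work is the event viewer's <fields> element, which lists the columns the panel shows; my_extra_field below is a hypothetical placeholder for whatever additional field you want in the header:

<panel>
  <event>
    <search>
      <query>index=*</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>
    <fields>_time, host, source, sourcetype, my_extra_field</fields>
    <option name="type">table</option>
  </event>
</panel>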
I have migrated to 9.4.1.   I initially I had certificate issues, which have been resolved. kv store still fails to start however Outside the error below (Failed to connect to target host: ip-10-34-2-203:8191) there are    /opt/splunk/bin/splunk show kvstore-status --verbose WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details. This member: backupRestoreStatus : Ready disabled : 0 featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191] guid : 4059932D-D941-4186-BE08-6B6426B618CB port : 8191 standalone : 1 status : failed storageEngine : wiredTiger   mongodb.log   2025-03-11T15:46:17.377Z I CONTROL [initandlisten] MongoDB starting : pid=2570573 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=ip-10-34-2-203 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] db version v4.2.25 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] git version: 41b59c2bfb5121e66f18cc3ef40055a1b5fb6c2e 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2zk-fips 3 Sep 2024 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] allocator: tcmalloc 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] modules: enterprise 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] build environment: 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] distmod: rhel70 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] distarch: x86_64 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] target_arch: x86_64 2025-03-11T15:46:17.377Z I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, tls: { CAFile: "opt/splunk/etc/auth/cacert.pem", allowConnectionsWithoutCertificates: true, allowInvalidHostnames: true, certificateKeyFile: "/opt/splunk/etc/auth/server.pem", certificateKeyFilePassword: "<password>", disabledProtocols: "noTLS1_0,noTLS1_1", mode: "requireTLS", tlsCipherConfig: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RS..." 
}, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "4059932D-D941-4186-BE08-6B6426B618CB" }, security: { clusterAuthMode: "sendX509", javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0", oplogFetcherSteadyStateMaxFetcherRestarts: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", engine: "wiredTiger", wiredTiger: { engineConfig: { cacheSizeGB: 2.25 } } }, systemLog: { timeStampFormat: "iso8601-utc" } } 2025-03-11T15:46:19.083Z I CONTROL [initandlisten] ** WARNING: This server will not perform X.509 hostname validation 2025-03-11T15:46:19.083Z I CONTROL [initandlisten] ** This may allow your server to make or accept connections to 2025-03-11T15:46:19.083Z I CONTROL [initandlisten] ** untrusted parties 2025-03-11T15:46:19.102Z I REPL [initandlisten] Rollback ID is 1 2025-03-11T15:46:19.103Z I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset 2025-03-11T15:46:19.122Z I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured 2025-03-11T15:46:19.129Z I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist 2025-03-11T15:46:19.135Z I NETWORK [listener] Listening on 0.0.0.0 2025-03-11T15:46:19.135Z I NETWORK [listener] waiting for connections on port 8191 ssl 2025-03-11T15:46:19.298Z I NETWORK [listener] connection accepted from 10.34.2.203:56880 #1 (1 connection now open) 2025-03-11T15:46:19.300Z I NETWORK [conn1] end connection 10.34.2.203:56880 (0 connections now open)     server.conf   [general] pass4SymmKey = serverName = splunk1 [sslConfig] serverCert = /opt/splunk/etc/auth/server.pem sslRootCAPath = opt/splunk/etc/auth/cacert.pem enableSplunkdSSL = true sslVersions = tls1.2 sslPassword = <yada yada yada> [kvstore] storageEngine = wiredTiger serverCert = /opt/splunk/etc/auth/server.pem sslRootCAPath = opt/splunk/etc/auth/cacert.pem sslVerifyServerCert = true sslVerifyServerName = true sslPassword = <yada yada yada>     serverd.log When i grep for hostname   root@ip-10-34-2-203:~# grep ip-10-34-2-203 /opt/splunk/var/log/splunk/splunkd.log 03-11-2025 12:48:48.418 +0000 INFO ServerConfig [0 MainThread] - My hostname is "ip-10-34-2-203". 03-11-2025 12:48:48.466 +0000 INFO loader [2492128 MainThread] - System info: Linux, ip-10-34-2-203, 5.15.0-1077-aws, #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:54 UTC 2025, x86_64. 
03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=deploymentServer/phoneHome/default connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_ds_default listener=0x7f2a306bfa00 03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=deploymentServer/phoneHome/default connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_ds_default listener=0x7f2a306bfa00 03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=deploymentServer/phoneHome/default/metrics connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_ds_default listener=0x7f2a306bfa00 03-11-2025 12:49:01.958 +0000 INFO PubSubSvr [2492128 MainThread] - Subscribed: channel=tenantService/handshake connectionId=connection_127.0.0.1_8089_ip-10-34-2-203_direct_tenantService listener=0x7f2a306bfc00 03-11-2025 13:44:36.801 +0000 ERROR KVStorageProvider [2493368 TcpChannelThread] - An error occurred during the last operation ('collectionStats', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191] 03-11-2025 13:44:36.801 +0000 ERROR CollectionConfigurationProvider [2493368 TcpChannelThread] - Failed to get collection stats for collection="era_email_notification_switch" with error: No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191] 03-11-2025 14:03:17.838 +0000 ERROR KVStorageProvider [2493425 TcpChannelThread] - An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: ip-10-34-2-203:8191] 03-11-2025 14:03:17.842 +0000 ERROR KVStorageProvider [2493425 TcpChannelThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on 'ip-10-34-2-203:8191']     These are other errors I noticed that might be related   03-11-2025 14:37:31.298 +0000 ERROR X509Verify [2538813 ApplicationUpdateThread] - Server X509 certificate (CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1,O=DigiCert Inc,C=US) failed validation; error=20, reason="unable to get local issuer certificate" 03-11-2025 14:37:31.298 +0000 WARN SSLCommon [2538813 ApplicationUpdateThread] - Received fatal SSL3 alert. ssl_state='error', alert_description='unknown CA'. 03-11-2025 14:37:31.298 +0000 WARN HttpClientRequest [2538813 ApplicationUpdateThread] - Returning error HTTP/1.1 502 Error connecting: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name. 
03-11-2025 14:37:31.298 +0000 ERROR ApplicationUpdater [2538813 ApplicationUpdateThread] - Error checking for update, URL=https://apps.splunk.com/api/apps:resolve/checkforupgrade: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name. 03-11-2025 14:38:57.211 +0000 ERROR KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=Failed to receive response from kvstore error=, service not ready after waiting for timeout=301389ms 03-11-2025 14:38:57.211 +0000 ERROR KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed. 03-11-2025 14:38:57.211 +0000 WARN KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Action scheduled, but event loop is not ready yet 03-11-2025 14:38:57.211 +0000 ERROR KVStoreBulletinBoardManager [2536490 KVStoreConfigurationThread] - KV Store changed status to failed. Failed to start KV Store process. See mongod.log and splunkd.log for details.. 03-11-2025 14:38:57.211 +0000 ERROR KVStoreBulletinBoardManager [2536490 KVStoreConfigurationThread] - Failed to start KV Store process. See mongod.log and splunkd.log for details. 03-11-2025 14:38:57.211 +0000 INFO KVStoreConfigurationProvider [2536490 KVStoreConfigurationThread] - Mongod service shutting down  
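One detail worth double-checking in the configuration above (assuming it is not just a copy-paste artifact): sslRootCAPath appears without a leading slash, and mongod logs CAFile: "opt/splunk/etc/auth/cacert.pem", i.e. a relative path. If splunkd cannot resolve the CA file, TLS connections to mongod on 8191 will fail, which would match the serverSelectionTimeoutMS errors. A sketch of the corrected stanzas:

[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem

[kvstore]
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem

Also, with sslVerifyServerName = true under [kvstore], the certificate presented by mongod needs the host name (here ip-10-34-2-203) in its CN or SAN; temporarily setting that to false can help confirm whether hostname verification is the blocker.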
I am trying to find a way to compare the results listed in a table to each other. Basically, the table lists the results of many different tests, where some tests have the same name but have been run and rerun, so they share a test_name but have different test_IDs. Something like this:

test_name | test_id | Score | Drift
test 1    | .98     | 100   |
test 1    | .99     | 98    | -2
test 1    | 1.00    | 100   | 2
test 2    | .01     | 30    |
test 3    | 0.54    | 34    |
test 3    | 0.55    | 76    | 42

I am looking for a way to take the score from line one and have some logic that looks at the next line: if that line has the same test_name but a different test_ID, subtract the first line's score from the second line's score, and continue along; when the next line has a different test_name, skip it until another pair of lines with the same test_name turns up, and so on until all scores are compared. The delta command almost works, but I need the ability to say BY test_name, something like:

| delta Score as Drift by test_name

Unfortunately, delta doesn't accept a by clause. I am trying to calculate the Drift column in Splunk so I can create a detection for when the drift exceeds a specific threshold.
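One way to get a per-test_name delta is streamstats, which does accept a by clause; a minimal sketch (untested, and it assumes the results are already sorted the way the table above is shown):

| streamstats current=f window=1 last(Score) as prev_score last(test_id) as prev_id by test_name
| eval Drift = if(isnotnull(prev_score) AND test_id != prev_id, Score - prev_score, null())
| table test_name, test_id, Score, Drift

Because streamstats runs by test_name, the previous value always comes from the most recent row with the same test name, which is effectively the missing "delta ... by".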
I have been having some trouble with a Generic KPI setup in Splunk ITSI. I have a query that returns data in the form:

Channel   | Count
Channel1  | 1000
Channel2  | 800
Channel3  | 1200

and so on. I wanted to set up a KPI that runs this query with the alert value being the sum of all the "Count" values. Here's how I configured it: I enabled a 7-day backfill, and I don't have any split-by-entity rules. I can see that the alert value is being captured in the search generated by the KPI builder, but I am unable to see any KPI data or values being captured, even when I let it sit for a while. Please help me with the setup. TIA
I want to use the Splunk App for Lookup File Editing to export a CSV lookup file so that the downloaded file opens on my local PC with UTF-8 BOM encoding. How can I do this? I went into the path below and modified lookup_editor_rest_handler.py to prepend a UTF-8 BOM to csv_data, but the change is not applied. When a user opens the exported file in Excel on a local PC, non-English characters are decoded incorrectly.

$SPLUNK_HOME/etc/apps/lookup_editor/bin/lookup_editor_rest_handler.py

import codecs

def post_lookup_as_file(self, request_info, lookup_file=None, namespace="lookup_editor",
                        owner=None, lookup_type='csv', **kwargs):
    self.logger.info("Exporting lookup, namespace=%s, lookup=%s, type=%s, owner=%s",
                     namespace, lookup_file, lookup_type, owner)
    try:
        # If we are getting the CSV, then just pipe the file to the user
        if lookup_type == "csv":
            with self.lookup_editor.get_lookup(request_info.session_key, lookup_file,
                                               namespace, owner) as csv_file_handle:
                csv_data = csv_file_handle.read()
                csv_data = codecs.BOM_UTF8.decode('utf-8') + csv_data

        # If we are getting a KV store lookup, then convert it to a CSV file
        else:
            rows = self.lookup_editor.get_kv_lookup(request_info.session_key, lookup_file,
                                                    namespace, owner)
            csv_data = shortcuts.convert_array_to_csv(rows)

        return {
            'payload': csv_data,  # Payload of the request.
            'status': 200,        # HTTP status code
            'headers': {
                'Content-Type': 'text/csv; charset=UTF-8',
                'Content-Disposition': f'attachment; filename*=UTF-8\'\'{lookup_file}'
            },
        }

    except (IOError, ResourceNotFound):
        return self.render_error_json("Unable to find the lookup", 404)

    except (AuthorizationFailed, PermissionDeniedException):
        return self.render_error_json("You do not have permission to perform this operation", 403)

    except Exception as e:
        self.logger.exception("Export lookup: details=%s", e)
        return self.render_error_json("Something went wrong!")
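If patching the handler keeps being ignored, a local workaround sketch (the file names are hypothetical) is to re-save the exported CSV with Python's utf-8-sig codec, which writes the BOM Excel looks for:

# Re-save an exported lookup with a UTF-8 BOM so Excel detects the encoding.
with open("exported_lookup.csv", "r", encoding="utf-8") as src:
    data = src.read()
with open("exported_lookup_bom.csv", "w", encoding="utf-8-sig") as dst:
    dst.write(data)

Note that codecs.BOM_UTF8.decode('utf-8') in the patch above already evaluates to the BOM character '\ufeff', so if the prepend has no visible effect, the export in the UI may be going through a different endpoint than the one you modified; that is worth confirming before changing the Python further.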
I want to get the total memory allocated on one indexer and how much memory it is using, so that I can also work out how much disk space is left.
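A sketch of one way to pull this, with the caveat that the exact field names returned by these REST endpoints vary by version, so check the raw output first (memory figures come back in MB from the hostwide resource-usage endpoint, and disk figures from partitions-space):

| rest /services/server/status/resource-usage/hostwide splunk_server=local
| table splunk_server, mem, mem_used

| rest /services/server/status/partitions-space splunk_server=local
| table splunk_server, mount_point, capacity, available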
Can someone help create an equivalent query to the following without using a subsearch? There are probably too many results and the query does not complete.

index=my_index [search index=my_index ("Extracted entities" AND "'date': None") OR extracted_entities.date=null | stats count by entity_id | fields entity_id | format] "Check something" | timechart count by classification

Basically, I want to extract the list of entity_ids from this search:

[search index=my_index ("Extracted entities" AND "'date': None") OR extracted_entities.date=null]

i.e. the ones where the date is null, and then use those IDs to correlate with a second search, "Check something", which has a field "classification". Finally, I want to run a timechart on the result to get a line graph of events where a date was missing, split by classification.
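One subsearch-free pattern (a sketch only; the quoting and the null test may need adjusting to how your events actually look) is to pull both event types in a single search and correlate them with eventstats over entity_id:

index=my_index (("Extracted entities" "'date': None") OR extracted_entities.date=null OR "Check something")
| eval missing_date=if(match(_raw, "'date': None") OR 'extracted_entities.date'=="null", 1, 0)
| eventstats max(missing_date) as entity_missing_date by entity_id
| where entity_missing_date=1 AND match(_raw, "Check something")
| timechart count by classification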
I am trying to identify the user or process responsible for stopping the Splunk UF agent. What log source do I require to be able to see this? I have unsuccessfully tried:

Searching in the _internal index - you can only see the service going down:
index=_internal sourcetype=splunkd host="DC*" component=Shutdown*

Monitoring the Windows System event log for the forwarder shutdown event (EventCode 7036) - no visibility on who performed the action.

Looking for ideas on how this can be achieved from Splunk.
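One avenue to consider (an untested sketch; it assumes Windows process-creation auditing with command-line logging is enabled and that the Security log lands in a wineventlog index with the usual TA field names):

index=wineventlog EventCode=4688 (Process_Command_Line="*splunk* stop*" OR Process_Command_Line="*stop*SplunkForwarder*")
| table _time, host, Account_Name, New_Process_Name, Process_Command_Line

Event 4688 records the account that launched the process that issued the stop, which 7036 does not; without command-line auditing, Sysmon EventCode 1 carries the same information.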
Hi, I have a Python modular input that populates an index (index_name). It ran into some gateway error issues, causing some data to be missing from the index. Is it possible to ingest a JSON file containing the missing data directly into that index (index_name)? Thanks.
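Yes, backfilling from a file is possible; one common approach is a one-shot upload from the CLI on an instance that can write to that index (the file path and sourcetype below are placeholders):

/opt/splunk/bin/splunk add oneshot /path/to/missing_data.json -index index_name -sourcetype _json

The file is indexed once; re-running the command creates duplicates, so it is worth trimming the file to only the missing time range first.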
I am trying to figure out the best way to perform this search. I have some JSON log events where the event data is slightly different based on the type of fruit (this is just an example). I have two searches that return each thing that I want. I'm not sure whether it is best to try to combine the two searches or whether there is a better way altogether. Here is an example of my event data:

Event Type 1
{
  "data": {
    "fruit": {
      "common": {
        "type": "apple",
        "foo": "bar1"
      },
      "apple": {
        "color": "red",
        "size": "medium",
        "smell": "sweet"
      }
    }
  }
}

Event Type 2
{
  "data": {
    "fruit": {
      "common": {
        "type": "pear",
        "foo": "bar2"
      },
      "pear": {
        "color": "green",
        "size": "medium",
        "taste": "sweet"
      }
    }
  }
}

I want to extract all of the "color" values from all of the log/JSON messages. I have two separate queries that extract each one, but I want them in a single table. Here are my current queries:

index=main | spath "data.pear.color" | search "data.pear.color"=* | eval fruitColor='data.pear.color' | table _time, fruitColor

index=main | spath "data.apple.color" | search "data.apple.color"=* | eval fruitColor='data.apple.color' | table _time, fruitColor

I know that there must be a way to do something with the 'type' field to get what I want, but I can't seem to figure it out. Any suggestion is appreciated.
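One way that may work is to search both event types at once and coalesce whichever color field exists; a sketch (the paths follow the JSON sample above, so adjust them to however your events actually extract, e.g. data.apple.color as in your current queries):

index=main
| spath
| eval fruitType='data.fruit.common.type'
| eval fruitColor=coalesce('data.fruit.apple.color', 'data.fruit.pear.color')
| where isnotnull(fruitColor)
| table _time, fruitType, fruitColor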
Description: I am using a Splunk Heavy Forwarder (HF) to forward logs to an indexer cluster. I need to configure props.conf and transforms.conf on the HF to drop all logs that originate from a specific directory and any of its subdirectories, without modifying the configuration each time a new subdirectory is created.

Scenario: The logs I want to discard are located under /var/log/apple/. This directory contains dynamically created subdirectories, such as:
/var/log/apple/nginx/
/var/log/apple/db/intro/
/var/log/apple/some/other/depth/
New subdirectories are added frequently, and I cannot manually update the configuration every time.

Attempted Solution: I configured props.conf as follows:

[source::/var/log/apple(/.*)?]
TRANSFORMS-null=discard_apple_logs

And in transforms.conf:

[discard_apple_logs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

However, this does not seem to work, as logs from the subdirectories are still being forwarded to the indexers.

Question: What is the correct way to configure props.conf and transforms.conf to drop all logs under /var/log/apple/, including those from any newly created subdirectories? How can I ensure that this rule applies recursively without explicitly listing multiple wildcard patterns? Any guidance would be greatly appreciated!
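A sketch of what usually works here (assuming the HF is the first parsing-capable instance to see these files): the simplest documented way to match an entire directory tree in a source:: stanza is the ... wildcard, which crosses directory separators (unlike *), so the regex-style (/.*)? pattern is not needed:

props.conf:
[source::/var/log/apple/...]
TRANSFORMS-null = discard_apple_logs

transforms.conf (unchanged):
[discard_apple_logs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

A splunkd restart on the HF is needed after the change; it is also worth confirming the data is not being cooked upstream (for example by INDEXED_EXTRACTIONS on a universal forwarder), since already-parsed data bypasses these index-time transforms.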
Hello everyone. I'm working with a query that handles certain "tickets" and "events", but some of them are duplicates, which is why it runs a dedup command. But there seems to be something else happening. The query is of the form:

index=main source=... ...
...
| fillnull value="[empty]"
| search tickets=***
| dedup tickets
| stats count by name, tickets
| stats sum(count) as numOfTickets by name
...
| fields name, tickets, count

Listing all the events, I can see that the main duplicate events are the ones that were null and were filled with "[empty]". But, for some reason, some of the events disappear with dedup. In theory, dedup should remove all duplicates and keep one event representing all of its "copies". That happens for some "names", but not for all. In the same query, I deal with events of the category "name1" and events of the category "name2". All of their instances are "[empty]", and running dedup removes all instances of "name1" and keeps one of "name2", when it should keep one of both. Why is that happening? Each instance is of the form "processTime | arrivalTime | name | tickets | count".
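A likely explanation, based on how dedup behaves: dedup tickets keeps the first event for each distinct value of the tickets field across the whole result set, so once one "[empty]" event (here from name2) is kept, every other "[empty]" event, including all of name1's, is discarded. If the intent is one event per name/tickets pair, a sketch of the adjustment:

| dedup name tickets

Alternatively, since the query already runs stats count by name, tickets, that stats call on its own collapses exact name/tickets duplicates, though the counts will then include the duplicates rather than one representative each.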
We are trying to onboard Akamai logs to Splunk and installed the add-on. It is asking for a proxy server and proxy host, and I am not sure what these mean. Our Splunk instances are hosted on AWS, are refreshed every 45 days due to compliance, and are not exposed to the internet (internal only). I spoke with the internal team and they said to use a sidecar proxy on our Splunk instances hosted on AWS. How do I create and configure a sidecar proxy server here? Please guide me. This is the app installed - https://splunkbase.splunk.com/app/4310
My goal is to run AppDynamics in the context of a PHP application using an Alpine container. I am using the official image php:8.2-fpm-alpine, which can be seen here: https://hub.docker.com/layers/library/php/8.2-fpm-alpine/images/sha256-fbe14883e5e295fb5ce3b28376fafc8830bb9d29077340000121003550b84748. On the AppDynamics side, I am using the archive below, which was the latest to be found in the download area:

appdynamics-php-agent-x64-linux-24.11.0.1340.tar.bz2

I was able to successfully install the PHP agent thanks to the install script from the archive:

appdynamics-php-agent-linux_x64/install.sh

However, when running the command "php -m", I get this message:

Warning: PHP Startup: Unable to load dynamic library 'appdynamics_agent.so' (tried: /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so (Error loading shared library libstdc++.so.6: No such file or directory (needed by /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so)), /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so (Error loading shared library /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so: No such file or directory)) in Unknown on line 0

I tried various ways to install it but then ran into other problems:

RUN apk add --no-cache \
    gcompat \
    libstdc++

which leads to:

Warning: PHP Startup: Unable to load dynamic library 'appdynamics_agent.so' (tried: /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so (Error relocating /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so: __vsnprintf_chk: symbol not found), /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so (Error loading shared library /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so: No such file or directory)) in Unknown on line 0

What could be wrong? I don't see much help in the documentation regarding AppDynamics in the context of an Alpine container.
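The two errors suggest the agent's shared object is built against glibc: the first run is missing libstdc++, and with gcompat installed it still fails on __vsnprintf_chk, a glibc fortify symbol that the musl compatibility layer does not provide. If a glibc-based image is an option, a minimal Dockerfile sketch (the install steps are placeholders for whatever you run today):

# The Debian-based PHP-FPM image ships glibc and libstdc++, which the agent .so links against
FROM php:8.2-fpm
COPY appdynamics-php-agent-x64-linux-24.11.0.1340.tar.bz2 /tmp/
# extract the archive and run appdynamics-php-agent-linux_x64/install.sh as before

Otherwise, checking with AppDynamics whether a musl/Alpine build of the PHP agent exists is probably safer than trying to shim the missing symbols.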
Hi everyone. I have a query that basically filters certain events and sums them by category, but I'm running into issues with stats sum. The query is of the form:

index=main source=...
...
| stats count BY name, ticket
| stats sum(count) as numOfTickets by name

Using some test data, removing the last line gives me a table with a single row of the form (treating the first line as the field names):

name  | tickets      | count
name1 | ticket_name1 | 1

Whenever I run the last line, that is, stats sum(count)..., it returns 0 events. I've already tried, for example, redundantly checking that count is a numeric value with "eval count = tonumber(count)". Why is this happening? Thank you in advance.
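A couple of hedged checks, since the simplified query and the sample row don't quite line up (the stats is by name, ticket but the table shows a tickets column): stats sum(count) by name returns nothing if the field feeding it isn't actually called count, or isn't present, at that point in the pipeline, so it is worth confirming both right before the final stats:

index=main source=...
| stats count BY name, ticket
| eval count_type=typeof(count)
| table name, ticket, count, count_type

If count_type comes back as Number and the field names match, the final stats sum(count) as numOfTickets by name should return one row per name; if the real query has extra commands between the two stats (as the ... suggests), one of them may be dropping or renaming count.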