I have three tables. Each has one or more ID fields (out of ID_A, ID_B, ID_C) and assigns values Xn, Yn, Zn to these IDs. In effect, the tables each contain a fragment of information about a set of objects 1...5.

Table X:

ID_A  ID_B  X1     X2
A1    B1    X1_1   X2_1
A2    B2    X1_2a  X2_2
A2    B2    X1_2b  X2_2
A3    B3    X1_3   X2_3

Table Y:

ID_A  ID_B  Y1    Y2
A2    B2    Y1_2
A2    B2          Y2_2
A3    B3          Y2_3a
A3    B3          Y2_3b
A4    B4    Y1_4  Y2_4

Table Z:

ID_B  ID_C  Z1
B1    C1    Z1_1
B3    C3    Z1_3
B5    C5    Z1_5

How can I create the superset of all three tables, i.e. reconstruct the "full picture" of objects 1..5 as well as possible? I tried union and join in various ways, but I keep tripping over the following obstacles:

- The 1:n relation between ID and values (which should remain expanded as individual rows)
- Empty fields in between (bad for stats list(...) or stats values(...) because of different-sized MV results)
- There is no single table that has references to all objects (e.g. object 5 is only present in table Z).

Desired result:

ID_A  ID_B  ID_C  X1     X2    Y1    Y2     Z1
A1    B1    C1    X1_1   X2_1               Z1_1
A2    B2          X1_2a  X2_2  Y1_2  Y2_2
A2    B2          X1_2b  X2_2  Y1_2  Y2_2
A3    B3          X1_3   X2_3        Y2_3a  Z1_3
A3    B3          X1_3   X2_3        Y2_3b  Z1_3
A4    B4                       Y1_4  Y2_4
      B5    C5                              Z1_5

Sample data:

| makeresults
| eval _raw="ID_A;ID_B;X1;X2
A1;B1;X1_1;X2_1
A2;B2;X1_2A;X2_2
A2;B2;X1_2B;X2_2
A3;B3;X1_3;X2_3
"
| multikv forceheader=1
| table ID_A, ID_B, X1, X2
| append [
  | makeresults
  | eval _raw="ID_A;ID_B;Y1;Y2
A2;B2;Y1_2;
A2;B2;;Y2_2
A3;B3;Y1_3;Y2_3A
A3;B3;Y1_3;Y2_3B
A4;B4;Y1_4;Y2_4
"
  | multikv forceheader=1
  | table ID_A, ID_B, Y1, Y2 ]
| append [
  | makeresults
  | eval _raw="ID_B;ID_C;Z1
B1;C1;Z1_1
B3;C3;Z1_3
B5;C5;Z1_5
"
  | multikv forceheader=1
  | table ID_B, ID_C, Z1 ]
| table ID_A, ID_B, ID_C, X1, X2, Y1, Y2, Z1
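For reference, a minimal sketch of the eventstats direction I have been experimenting with: spread the fields that are single-valued per ID_B (the Table Z columns) onto every row that shares that ID_B, then drop the Z-only rows whose information has been absorbed. It does not yet merge the X rows with the Y rows, so it is a starting point rather than the desired result; the test data below is a reduced version of the sample above.

| makeresults | eval ID_A="A1", ID_B="B1", X1="X1_1", X2="X2_1"
| append [| makeresults | eval ID_A="A2", ID_B="B2", X1="X1_2a", X2="X2_2"]
| append [| makeresults | eval ID_B="B1", ID_C="C1", Z1="Z1_1"]
| append [| makeresults | eval ID_B="B5", ID_C="C5", Z1="Z1_5"]
| fields - _time
``` copy the Table Z columns onto every row that shares the same ID_B ```
| eventstats values(ID_C) AS ID_C values(Z1) AS Z1 count AS rows_with_this_B BY ID_B
``` drop Z-only rows that were absorbed by another row; the lone B5 row is kept ```
| where NOT (isnull(X1) AND isnull(Y1) AND isnull(Y2) AND rows_with_this_B > 1)
| table ID_A ID_B ID_C X1 X2 Y1 Y2 Z1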
Hi, I want to go through Splunk Fundamentals 1. Where can I get the link for it?
Hello All,

I have created a dashboard and it always shows "No results found". But when I click "Open in Search" or run the search query directly, it does show results. Can anyone help, please?

<form version="1.1" theme="light">
  <label>Successful connections by an IP range dashboard</label>
  <search id="base_srch">
    <query>index=prod sourcetype=auth_logs</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <fieldset submitButton="false">
    <input type="time" token="time" searchWhenChanged="true">
      <label>test</label>
      <default>
        <earliest>-4h@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search base="base_srch">
          <query>|stats count by ip</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</form>
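For context on why this pattern often returns "No results found": when a base search is not a transforming search, post-process searches only see the fields the base search explicitly carries, so a bare "index=prod sourcetype=auth_logs" followed by "| stats count by ip" can come back empty. A minimal sketch of one common fix is to keep the needed field in the base search (the alternative is to move the whole stats into the base search and post-process from there):

<search id="base_srch">
  <query>index=prod sourcetype=auth_logs | fields ip</query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
</search>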
Statement: You install 1Password Events Reporting for Splunk from https://splunkbase.splunk.com/app/5632.

Problem: After configuring it correctly, you see error messages like these in the _internal index:

03-26-2024 11:37:30.974 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/audit_events" 2024/03/26 11:37:30 [DEBUG] POST https://events.1password.com/api/v1/auditevents
03-26-2024 11:37:27.672 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/signin_attempts" 2024/03/26 11:37:27 [DEBUG] POST https://events.1password.com/api/v1/signinattempts
03-26-2024 11:37:23.259 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/item_usages" 2024/03/26 11:37:23 [DEBUG] POST https://events.1password.com/api/v1/itemusages
03-26-2024 11:37:20.561 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/audit_events" 2024/03/26 11:37:20 [DEBUG] POST https://events.1password.com/api/v1/auditevents
03-26-2024 11:37:17.440 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/signin_attempts" 2024/03/26 11:37:17 [DEBUG] POST https://events.1password.com/api/v1/signinattempts

How do you resolve this? The app was configured with a token, the macros had indexes defined, and the interval for the scripted input was set to a cron schedule. Splunk 9.0.3 core, standalone dev environment.
AIM: Integrate AppDynamics with a Kubernetes cluster using the provided documentation.

Issue: I've set up a Kubernetes cluster and aimed to integrate it with AppDynamics for monitoring. Following the provided documentation, I successfully created the cluster agent. However, I encountered errors in the logs and found that the cluster data isn't showing up in the AppDynamics interface.

Reference: Install the Cluster Agent with the Kubernetes CLI

Logs and Findings:

PS C:\Users\SajoSam> kubectl logs k8s-cluster-agent-5f8977b869-bpf5v
CA_PROPERTIES= -appdynamics.agent.accountName=myaccount -appdynamics.controller.hostName=mycontroller.saas.appdynamics.com -appdynamics.controller.port=8080 -appdynamics.controller.ssl.enabled=false -appdynamics.agent.monitoredNamespaces=default -appdynamics.agent.event.upload.interval=10 -appdynamics.docker.container.registration.interval=120 -appdynamics.agent.httpClient.timeout.interval=30
APPDYNAMICS_AGENT_CLUSTER_NAME=onepane-cluster
[ERROR]: 2024-03-26 09:55:04 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory
[INFO]: 2024-03-26 09:55:04 - main.go:57 - check env variables and enable profiling if needed
[INFO]: 2024-03-26 09:55:04 - agentprofiler.go:22 - Cluster Agent Profiling not enabled!
[INFO]: 2024-03-26 09:55:04 - main.go:60 - Starting APPDYNAMICS CLUSTER AGENT version 24.2.0-317
[INFO]: 2024-03-26 09:55:04 - main.go:61 - Go lang version: go1.22.0
W0326 09:55:04.910967 7 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
[INFO]: 2024-03-26 09:55:04 - main.go:78 - Kubernetes version: v1.29.0
[INFO]: 2024-03-26 09:55:04 - main.go:233 - Registering cluster agent with controller host : mycontroller.saas.appdynamics.com controller port : 8080 account name : xxxxx
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:356 - Established connection to Kubernetes API
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:68 - Cluster name: onepane-cluster
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:56:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:57:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": dial tcp 35.84.229.250:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:58:04 - agentregistrationmodule.go:119 - Initial Agent registration

Questions:
1. What could be the root cause of the failure to access the secret file /opt/appdynamics/cluster-agent/secret-volume/api-user?
2. What could be causing the timeout error during the registration request to the AppDynamics controller?

Could you help me with this? Thank you.

^ Post edited by @Ryan.Paredez to redact account name and controller name. For privacy and security reasons, please do not share your Account name or Controller URL.
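As a first check on the secret error, it may help to confirm that the secret the agent expects to find mounted at /opt/appdynamics/cluster-agent/secret-volume actually exists and is mounted into the pod. This is only a sketch; the namespace and secret name below are assumptions based on typical Cluster Agent installs, so substitute the ones from your own deployment manifest:

# List secrets in the namespace the cluster agent runs in
# ("appdynamics" and "cluster-agent-secret" are assumed names)
kubectl -n appdynamics get secrets
kubectl -n appdynamics describe secret cluster-agent-secret

# Check what is actually mounted inside the running agent pod
kubectl -n appdynamics exec k8s-cluster-agent-5f8977b869-bpf5v -- ls /opt/appdynamics/cluster-agent/secret-volume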
Hi Splunk team,

We have been using a query similar to the one below across 15+ Splunk alerts, but the count mentioned in the alert email shows 4 times the actual number of failure occurrences.

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)
| stats count by Object, Failure_Message
| sort count

The Splunk query below returns the correct failure events:

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)

Can you please help update the first query so it shows the correct count instead of the inflated one?
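One way to narrow this down (a sketch, keeping the placeholder names from the query above): if the inflation comes from the same raw event being counted more than once, for example because it was indexed several times or because a multivalued field splits it across rows, deduplicating on _raw before the stats should bring the count back in line with the number of events the second search returns.

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)
| dedup _raw
| stats count by Object, Failure_Message
| sort count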
Good morning,

I have started to ingest Palo Alto FW events and they are coming in with a wrong timestamp; the timestamp is 2 hours less than the real time. Let me show an example.

This is an event in my SCP. My SCP is in Spain time (UTC+1), and it is 11:06 right now. The events are being ingested at 11:06, but they are coming in with a timestamp of 9:06. The PA server is in Mexico and the timestamp in the raw event is 4:06, 5 hours earlier. The heavy forwarder is also in Mexico, but its clock is set to EDT time.

I hope I have explained this properly. How can I fix it?
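This usually comes down to Splunk not knowing which timezone the raw timestamps are written in, since the PAN syslog timestamps carry no offset, so it falls back to the parsing host's timezone. A minimal props.conf sketch, assuming the firewall writes its timestamps in Mexico local time and the events land in a pan:* sourcetype (the stanza name and TZ value are assumptions to adapt to your setup); it belongs on the first heavy forwarder or indexer that parses the data, needs a restart there, and only affects newly indexed events:

# props.conf on the parsing tier (heavy forwarder / indexer) -- a sketch
[pan:log]
TZ = America/Mexico_City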
I have 3 different sources of the same field. I want to aggregate all 3 sources and get the distinct count of the field, e.g.

sourcetype=source1 | stats dc(userlist)
sourcetype=source2 | stats dc(line.userlist)
sourcetype=source3 | stats dc(line.subject)

Here userlist, line.userlist and line.subject are all the same attribute, just logged differently. Now I want to get the dc of userlist + line.userlist + line.subject. Any help is appreciated.
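A minimal sketch of one way to do this (field and sourcetype names taken from the examples above): search all three sourcetypes at once, normalise the three differently named fields into one with coalesce, and take a single distinct count. Note the single quotes around the dotted field names, which eval needs in order to treat them as field references.

(sourcetype=source1 OR sourcetype=source2 OR sourcetype=source3)
| eval user=coalesce(userlist, 'line.userlist', 'line.subject')
| stats dc(user) AS distinct_users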
Hi, I have a couple of questions related to Splunk AI Assistant:
1. Do we need to install Splunk AI Assistant on each Splunk server that we are using?
2. Can Splunk AI Assistant be called using API calls?
3. Does Splunk AI Assistant provide the SPL query or the result of the SPL query?
4. Based on the user's query, if there are multiple matches, will Splunk AI Assistant return all available SPL queries or only the best match?
Good morning fellow Splunkthiasts!

I have an index with 100k+ events per minute (all of them having the same sourcetype); approximately 100 fields are known in this dataset. Some of these events are duplicated, while others are unique. My aim is to understand the duplication and be able to explain exactly which events get duplicated. I am detecting duplicates using this SPL:

index="myindex" sourcetype="mysourcetype"
| eventstats count AS duplicates BY _time, _raw

Now I need to identify which fields, or which combination of fields, make the difference, i.e. under what circumstances an event is ingested twice. I tried the predict command; however, it produces new values for the "duplicates" field without disclosing the rule by which it makes the decision. In other words, I am not interested in the prediction itself, I want to know the predictors. Is something like that possible in SPL?
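Not a full answer, but a sketch of a diagnostic that is often useful before reaching for ML: for the duplicated events only, compare the ingestion metadata of the copies (source, host, indexer, index time). If the copies differ only in index time or indexer, the duplication is happening at ingestion (for example two inputs or forwarding paths picking up the same data) rather than being driven by the event fields themselves. The field names here are standard Splunk metadata; the index and sourcetype are the ones from the search above.

index="myindex" sourcetype="mysourcetype"
| eventstats count AS duplicates BY _time, _raw
| where duplicates > 1
| eval index_time=_indextime
| stats values(source) AS sources values(host) AS hosts values(splunk_server) AS indexers dc(index_time) AS distinct_indextimes BY _time, _raw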
Hello everyone,

I'm coming to you for advice. I am currently working with Splunk to monitor WSO2-APIM instances.

According to the WSO2-APIM documentation, logs are generated as follows:

[2019-12-12 17:30:08,091] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]"
[2019-12-12 17:30:08,093] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Host: localhost:8243[\r][\n]"
[2019-12-12 17:30:08,094] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "User-Agent: curl/7.54.0[\r][\n]"
[2019-12-12 17:30:08,095] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "accept: */*[\r][\n]"
[2019-12-12 17:30:08,096] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]"
[2019-12-12 17:30:08,097] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "[\r][\n]"
[2019-12-12 17:30:08,105] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]"
[2019-12-12 17:30:08,106] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "accept: */*[\r][\n]"
[2019-12-12 17:30:08,107] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Host: www.mocky.io[\r][\n]"
[2019-12-12 17:30:08,108] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Connection: Keep-Alive[\r][\n]"
[2019-12-12 17:30:08,109] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2019-12-12 17:30:08,110] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "[\r][\n]"
[2019-12-12 17:30:08,266] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "HTTP/1.1 200 OK[\r][\n]"
[2019-12-12 17:30:08,268] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Server: Cowboy[\r][\n]"
[2019-12-12 17:30:08,269] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Connection: keep-alive[\r][\n]"
[2019-12-12 17:30:08,271] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]"
[2019-12-12 17:30:08,272] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Type: application/json[\r][\n]"
[2019-12-12 17:30:08,273] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Length: 20[\r][\n]"
[2019-12-12 17:30:08,274] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Via: 1.1 vegur[\r][\n]"
[2019-12-12 17:30:08,275] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "[\r][\n]"
[2019-12-12 17:30:08,276] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "{ "hello": "world" }"
[2019-12-12 17:30:08,282] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "HTTP/1.1 200 OK[\r][\n]"
[2019-12-12 17:30:08,283] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Expose-Headers: [\r][\n]"
[2019-12-12 17:30:08,284] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Origin: *[\r][\n]"
[2019-12-12 17:30:08,285] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Methods: GET[\r][\n]"
[2019-12-12 17:30:08,286] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]"
[2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Content-Type: application/json[\r][\n]"
[2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Via: 1.1 vegur[\r][\n]"
[2019-12-12 17:30:08,288] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]"
[2019-12-12 17:30:08,289] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Transfer-Encoding: chunked[\r][\n]"
[2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]"
[2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "14[\r][\n]"
[2019-12-12 17:30:08,291] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "{ "hello": "world" }[\r][\n]"
[2019-12-12 17:30:08,292] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "0[\r][\n]"
[2019-12-12 17:30:08,293] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]"

And also according to the documentation:
DEBUG - wire >> represents a message coming into the API Gateway from the wire.
DEBUG - wire << represents a message that goes to the wire from the API Gateway.

I use AWS Lambda to retrieve the WSO2-APIM logs, which are stored in AWS CloudWatch. I've just started using Splunk, so I'm not very good at SPL yet. I would like Splunk to process the events with SPL and output something like this:

Date, loglevel, action_https, correlationID, message, duration
[2019-12-12 17:30:08,091], DEBUG, HTTPS-Listener, dispatcher-5, "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]" "Host: localhost:8243[\r][\n]" "User-Agent: curl/7.54.0[\r][\n]" "accept: */*[\r][\n]" "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]" "[\r][\n]", 006
[2019-12-12 17:30:08,105], DEBUG, HTTPS-Listener, dispatcher-1, "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]" "accept: */*[\r][\n]" "Host: www.mocky.io[\r][\n]" "Connection: Keep-Alive[\r][\n]" "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]" "[\r][\n]", 005
[2019-12-12 17:30:08,266], DEBUG, HTTPS-Sender, dispatcher-1, "HTTP/1.1 200 OK[\r][\n]" "Server: Cowboy[\r][\n]" "Connection: keep-alive[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Content-Type: application/json[\r][\n]" "Content-Length: 20[\r][\n]" "Via: 1.1 vegur[\r][\n]" "[\r][\n]" "{ "hello": "world" }", 010
[2019-12-12 17:30:08,282], DEBUG, HTTPS-Listener, dispatcher-5, "HTTP/1.1 200 OK[\r][\n]" "Access-Control-Expose-Headers: [\r][\n]" "Access-Control-Allow-Origin: *[\r][\n]" "Access-Control-Allow-Methods: GET[\r][\n]" "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]" "Content-Type: application/json[\r][\n]" "Via: 1.1 vegur[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Transfer-Encoding: chunked[\r][\n]" "[\r][\n]" "14[\r][\n]" "{ "hello": "world" }[\r][\n]" "0[\r][\n]" "[\r][\n]", 011

Do you have any ideas on how to do this with SPL in the Search app? Thank you to those who took the time to read and reply to me.
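A sketch of one possible direction in the Search app, assuming the wire logs are ingested one line per event (the names index=wso2 and sourcetype=wso2:wire below are placeholders): extract the pieces with rex, then group consecutive lines of the same dispatcher and direction with transaction. The grouping key and maxpause value are guesses that will need tuning against real traffic, and transaction reports duration in seconds rather than the zero-padded millisecond form in the example above.

index=wso2 sourcetype=wso2:wire
| rex field=_raw "^\[(?<log_time>[^\]]+)\]\s+(?<loglevel>\w+)\s+-\s+wire\s+(?<action_https>\S+)\s+I/O\s+(?<correlationID>dispatcher-\d+)\s+(?<direction>>>|<<)\s+\"(?<message>.*)\"$"
| transaction correlationID direction maxpause=1s mvlist=message
| eval duration_ms=round(duration*1000)
| table _time loglevel action_https correlationID direction message duration_ms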
Hello, I have an architecture with a single SH and two indexers. I've installed the Splunk for Microsoft 365 add-on on the search head, so the collected logs are stored in the search head's index, but I want them to be stored on the indexers. Can you help me? Thank you.
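In case it helps frame the question: the usual pattern here is to leave the add-on's inputs on the search head but forward everything the search head generates or collects to the indexing tier, so nothing is written to the search head's local indexes. A minimal outputs.conf sketch for the search head (hostnames and port are placeholders; the port must match what the indexers are listening on):

# outputs.conf on the search head -- a sketch with placeholder hostnames/port
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997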
I need to call a custom function inside another custom function. How can I implement this?
Hi all,

I have faced a serious problem after upgrading my indexers to 9.2.0.1! Occasionally they stop the data flow, and sometimes they are shown as down on the cluster master. I analyzed the problem and this error shows up from time to time:

Search peer indexer-1 has the following message: The index processor has paused data flow. Too many tsidx files in idx=main bucket="/opt/SplunkData/db/defaultdb/hot_v1_13320" , waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

It worked smoothly with the same load on lower versions! I think this is a bug in the new version, or some additional configuration is needed. In the end, I rolled back to 9.1.3 and it now works perfectly.
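For anyone trying to quantify how often this happens and on which indexes and buckets, a sketch of a search over _internal that keys off the exact message text above (the rex just pulls the idx and bucket values out of that message):

index=_internal sourcetype=splunkd "Too many tsidx files"
| rex "idx=(?<idx>\S+)\s+bucket=\"(?<bucket>[^\"]+)\""
| stats count AS pause_messages dc(bucket) AS affected_buckets BY host, idx
| sort - pause_messages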
Hi SMEs,

Seeking help with a field extraction for the events below, to capture hostname1, hostname2, hostname3 and hostname4:

Mar 22 04:00:01 hostname1 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
Mar 22 04:00:01 hostname2 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
2024-03-21T23:59:31.143161+05:30 hostname3 caam: [INVENTORY|CaaM-14a669917c4a02f5|caam|e0ded6f4f97c17132995|Dummy-5|INFO|caam_inventory_controller] Fetching operationexecutions filtering with vn_id CaaM-3ade67652a6a02f5 and tenant caam
2024-03-23T04:00:17.664082+05:30 hostname4 sudo: root : TTY=unknown ; PWD=/home/caam/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-22 -e Mar\ 22 /var/log/secure.7.gz
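A sketch of a search-time rex that handles both timestamp styles in the samples above (classic syslog "Mar 22 04:00:01" and ISO-8601 with offset) and captures the token that follows; the field name extracted_host is just an example:

| rex field=_raw "^(?:\w{3}\s+\d+\s+[\d:]+|\d{4}-\d{2}-\d{2}T[\d:.]+\+\d{2}:\d{2})\s+(?<extracted_host>\S+)\s"

The same regex could also be used as an EXTRACT in props.conf if you want it applied automatically to the sourcetype.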
I want to visualize the service limitations from the Splunk Cloud Platform Service Details documentation (Splunk Cloud Platform Service Details - Splunk Documentation) on a Splunk dashboard.

Background:
- There are numerous service limitations in areas you only learn about through experience, such as knowledge bundle size and the number of sourcetypes.
- Some customers have run Splunk without knowing these limits and suffered operational impact as a result.
- Regardless of the environment, Splunk users unknowingly carry the risk of affecting business operations.
- The documentation describes which services have which limits, but the actual upper limits vary by environment.
- Checking configuration files (such as .conf files) is operationally burdensome, so I would like to proactively visualize the configured limits and current usage on a dashboard.

At the moment I want to visualize four aspects, and to build the dashboard I need to retrieve the limitation values with SPL (Search Processing Language).

Questions:
1. Are the limitations stored in configuration files (such as .conf files) that can be read via REST or SPL commands, or is this information only available to Splunk?
2. Regarding "Knowledge Bundle replication size": if there is an SPL query or another method to see the size over time (e.g. over the past few days), please share it.
3. Regarding the "IP Allow List" limitation: is there a way to check the number of IP addresses registered in the IP allow list (e.g. via REST)?

Thank you.
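On question 1, many of these limits are ordinary limits.conf settings, and the effective values can be read from the search tier over REST; a sketch follows (the endpoint is the standard configs/conf-<file> form, but which stanzas you can actually see in Splunk Cloud depends on your experience and role, so treat this as something to verify):

| rest splunk_server=local /servicesNS/-/-/configs/conf-limits
| table title eai:acl.app *

On question 2, knowledge bundle replication activity is logged by splunkd into the _internal index (the DistributedBundleReplicationManager component is the usual place to look), so charting bundle sizes over the past few days should be possible from there; the exact component and field names vary by version, so take that as a pointer to confirm rather than a ready-made query.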
Hello,

Log:

Mar 22 10:50:51 x.x.x.21 Mar 22 11:55:00 Device version -: [2024-03-22 11:54:12] Event : , IP : , MAC : , Desc :

Props:

[host::x.x.x.21]
CHARSET = utf8
TIME_PREFIX = \-:\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S

When I check the _time field, the value is still 2021-03-22 10:50:51. The device's IP is x.x.x.21, so it seems that the 21 is being recognized as the year. I configured props accordingly, but the props are not working...

Help me, please. Thank you.
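A sketch of what could be worth trying, assuming the stanza otherwise matches: limit how far Splunk reads after the TIME_PREFIX match with MAX_TIMESTAMP_LOOKAHEAD, and make sure this props.conf lives on the instance that actually parses the data (the first heavy forwarder or the indexer, not a universal forwarder or only the search head) and gets restarted; the change only affects newly indexed events. Everything other than your original TIME_PREFIX/TIME_FORMAT below is a suggestion to verify.

# props.conf on the parsing tier -- a sketch
[host::x.x.x.21]
CHARSET = UTF-8
TIME_PREFIX = \-:\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19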
I'm struggling to figure this one out. We have data coming in via an HEC endpoint that is JSON based, with the HEC endpoint setting sourcetype to _json. This is Splunk Cloud.

A bit of background on our data: all of the data we send to Splunk has an "event" field, which is a number that indicates a specific type of thing that happened in our system. There's one index where this data goes, with a 45-day retention period. Some of this data we want to keep around longer, so we use collect to copy it over for longer retention. We have a scheduled search that runs regularly and does:

index=ourIndex event IN (1,2,3,4,5,6) | collect index=longTerm output_format=hec

We use output_format=hec because without it the data isn't searchable: "index=longTerm event=3" never shows anything. There's a bunch of _raw, but that's it.

Also, for the sake of completeness, this data is being sent by Cribl. Our application normally logs CSV-style data, with the first 15 or so columns fixed in their meaning (everything has those common fields). The 16th column contains a description with, in parentheses, a semicolon-separated list of additional parameter/field names, and each additional CSV column has a value corresponding to that field name in the list. Sometimes that value is JSON data logged as a string. To avoid sending JSON data as a string inside an actual JSON payload, we have Cribl detect that, expand that JSON field, and construct it as a native part of the payload. So:

1,2024-03-01 00:00:00,user1,...12 other columns ...,User did something (didClick;details),1,{"where":"submit"%2c"page":"home"}

gets sent to the HEC endpoint as:

{"event":1,"_time":"2024-03-01 00:00:00","userID":"user1",... other stuff ..., "didClick":1,"details":{"where":"submit","page":"home"}}

The data that ends up missing is always the extrapolated JSON data. Anything that is part of the base JSON document always seems to be fine.

Now, here's the weird part. If I run the search query that does the collect to ONLY look for a specific event and collect on that, things actually seem fine and data is never lost. When I introduce additional events that I want to collect on, some of those fields are missing for some, but not all, of those events. The more events I add to the IN() clause, the more those fields go missing for the events that have extrapolated JSON in them. For each event that has missing fields, all extrapolated JSON fields are missing.

When I've tried to use the _raw field, run spath on that, and then pipe the result to collect, that seems to work reliably, but it also seems like an unnecessary hack. There are dozens of these events, so breaking them out into their own discrete searches isn't something I'm particularly keen on.

Any ideas or suggestions?
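For reference, a sketch of the spath-based workaround mentioned above, written out explicitly (index names are the ones from the post): spath with no arguments re-parses _raw, so every JSON field, including the nested ones Cribl expanded, exists as a search-time field before collect serializes the results.

index=ourIndex event IN (1,2,3,4,5,6)
| spath
| collect index=longTerm output_format=hec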
Hi,

I have two sets of data: one is proxy logs (index=netproxy) and the other is an extract of LTE logs, which logs every time a device joins. I'd like to cross-reference the proxy logs with the LTE data so I can extract the IMEI number, but the IMEI number could exist in logs outside of the search time window. The search below works, but only if the timeframe is big enough that it includes the device in the proxy logs. Is there a way I can extend the earliest time to, say, 24 hours prior to the search time window? I don't want to use "all time" on the subsearch because the IP address allocations will change over time and would then be matched against the wrong IMEI.

index=netproxymobility sourcetype="zscalernss-web"
| fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass
| join type=left ClientIP
    [ search index=netlte
      | dedup ClientIP
      | fields ClientIP IMEI ]

Thanks
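A sketch of one option, assuming the outer search window always ends at now: give the subsearch its own, wider time range. Time modifiers written inside a subsearch override the time range picker for that subsearch only, so the outer proxy search keeps the picker's window while the LTE lookup reaches back further (here a hard-coded 48 hours as an example, covering a 24-hour picker window plus 24 hours of extra lookback).

index=netproxymobility sourcetype="zscalernss-web"
| fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass
| join type=left ClientIP
    [ search index=netlte earliest=-48h@h latest=now
      | dedup ClientIP
      | fields ClientIP IMEI ]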
Dear Splunkers,

My goal is to expose only certain dashboards to an external customer. I created a dedicated role and user with minimal access to a single app where these dashboards are placed. However, I'm struggling with hiding the Splunk bar / navigation menu, i.e. the customer can still use the "find" box to search for reports and dashboards he is not supposed to see. Could you please point me to how to hide it?

The navigation menu looks like this:

<nav search_view="search">
  <view name="search" />
  <view name="datasets" />
  <view hideSplunkBar="true" />
  <view hideAppBar="true" />
  <view hideChrome="true" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" default='true'/>
</nav>

regards,
Sz
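For what it's worth, hideSplunkBar, hideAppBar and hideChrome are not attributes of <view> elements in the app navigation (default.xml); the nav file only controls which views appear in the app's menu bar. They are usually applied per dashboard, for example as URL query parameters on the link you hand to the customer. A sketch with placeholder host, app, and dashboard names (please verify the supported parameters against the Simple XML documentation for your Splunk version):

https://your-splunk-host:8000/en-US/app/your_app/your_dashboard?hideSplunkBar=true&hideAppBar=true&hideChrome=true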