All Topics


Hello, I need to monitor some critical devices (stored in a lookup file) connected to the CrowdStrike console, in particular whether they become disconnected from it. We receive one event every 2 hours for each device from the CrowdStrike device JSON input in Splunk, so basically if no new event arrives for a device after 2 hours, the alert should trigger and report the hostname. Does anyone have an idea for implementing this?
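A minimal sketch of the kind of scheduled search that could drive such an alert, assuming a lookup named critical_devices.csv with a hostname column and placeholder index/sourcetype names (all of these are assumptions, not the poster's actual setup):

```
| inputlookup critical_devices.csv
| rename hostname AS host
| join type=left host
    [ search index=crowdstrike sourcetype="crowdstrike:device:json" earliest=-2h
      | stats latest(_time) AS last_seen by host ]
| where isnull(last_seen)
| table host
```

Scheduled every 2 hours with a trigger condition of "number of results > 0", any host left without a last_seen value has not reported in the last window.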
Hi. I am new to Splunk. I have configured everything and have been trying to solve this issue for 2 days. I have a universal forwarder on an Ubuntu server on a different network, and I have installed Splunk Enterprise on my Windows 10 computer. Port 9997 is enabled, the firewall is disabled, and I even forwarded port 9997 through the Zyxel interface. Splunk is listening on port 9997. The problem is that a telnet from any other source to my computer (I tried with both my mobile internet and the UF client) is still denied. How should I proceed to make this work? I'm badly stuck. Thanks for your help. This is the mobile-internet test with Test-NetConnection to my PC (the Splunk server, I guess):

ComputerName : x.x.x.x <desired.connection>
RemoteAddress : x.x.x.x <desired connection>
RemotePort : 9997
InterfaceAlias : Wi-Fi
SourceAddress : X.x.x.x <my ip>
PingSucceeded : False
PingReplyDetails (RTT) : 0 ms
TcpTestSucceeded : False
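For reference, the receiving/forwarding pair this setup relies on usually looks like the sketch below (placeholder paths and IP, not the poster's actual files). If the listener side is fine, a failing TcpTestSucceeded from outside the LAN usually points at NAT/port-forwarding on the router rather than at Splunk itself.

```
# indexer (Windows): %SPLUNK_HOME%\etc\system\local\inputs.conf
[splunktcp://9997]
disabled = 0

# universal forwarder (Ubuntu): $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout:default-autolb-group]
server = <windows-host-public-ip>:9997
```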
We have a pre-production environment which is totally separate from our production environment. A number of our dashboards use hyperlinks back into source systems to view complete records, and parts of the URLs differ because of the IP addresses in the two environments. What I would like to do is set a global setting so that we can use the same dashboards without having to recode the links when we migrate from pre-production to production. I tried to use a macro, but as these are limited to searches it did not affect the link target. Is there any other way we can do this without having to add migration steps to amend the URLs? Thanks, Steven
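One pattern that is sometimes used for this (a sketch only; the token name and URLs are placeholders) is to define the environment-specific base URL once as a dashboard token and reference it in every drilldown link, so only the token value changes between environments:

```xml
<form>
  <init>
    <set token="base_url">https://10.0.0.1</set>
  </init>
  ...
  <drilldown>
    <link target="_blank">$base_url$/records/view?id=$row.record_id$</link>
  </drilldown>
</form>
```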
Hey guys, I have a question about CSS in Splunk. I want to move these displays like this: And I can't use Dashboard Studio because I need some features that Dashboard Studio doesn't have yet, like the depends functionality. I'm trying with CSS grid but I couldn't find a way to do this yet.
I just posted a question that was immediately rejected. How do I get it approved please?
I use the OpenTelemetry Java agent to monitor FusionAuth in one Docker container, and send the output to the Splunk OpenTelemetry Docker container in Gateway mode.

Here's a diagram of my system architecture:

```mermaid
graph LR
  subgraph I[Your server]
    direction LR
    subgraph G[Docker]
      H[(Postgresql)]
    end
    subgraph C[Docker]
      direction BT
      D(OpenTelemetry for Java) --> A(FusionAuth)
    end
    subgraph E[Docker]
      B(Splunk OpenTelemetry collector)
    end
  end
  C --> G
  C --> B
  E --> F(Splunk web server)
  style I fill:#111
```

The Splunk container runs correctly and exports sample data to Splunk Observability Cloud. I can see it in the dashboard.

FusionAuth and the Java agent run correctly. But the Otel sender cannot send to the Otel collector. I get network errors:

```sh
fa | [otel.javaagent 2024-06-07 13:52:40:936 +0000] [OkHttp http://otel:4317/...] ERROR io.opentelemetry.exporter.internal.http.HttpExporter - Failed to export logs. The request could not be executed. Full error message: Connection reset
fa | java.net.SocketException: Connection reset
fa |     at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:328)
fa |     at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:355)
...

fa | [otel.javaagent 2024-06-07 13:52:42:847 +0000] [OkHttp http://otel:4317/...] ERROR io.opentelemetry.exporter.internal.http.HttpExporter - Failed to export spans. The request could not be executed. Full error message: Connection reset by peer
fa | java.net.SocketException: Connection reset by peer
fa |     at java.base/sun.nio.ch.NioSocketImpl.implWrite(NioSocketImpl.java:425)
fa |     at java.base/sun.nio.ch.NioSocketImpl.write(NioSocketImpl.java:445)
fa |     at java.base/sun.nio.ch.NioSocketImpl$2.write(NioSocketImpl.java:831)
fa |     at java.base/java.net.Socket$SocketOutputStream.write(Socket.java:1035)
```

I'm using the standard configuration file for Splunk Linux Collector - https://github.com/signalfx/splunk-otel-collector/blob/main/cmd/otelcol/config/collector/otlp_config_linux.yaml

Below is my docker compose file:

```yaml
services:
  db:
    image: postgres:latest
    container_name: fa_db
    ports:
      - "5432:5432"
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U postgres" ]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - db_net
    volumes:
      - db_data:/var/lib/postgresql/data

  fa:
    # image: fusionauth/fusionauth-app:latest
    image: faimage
    container_name: fa
    # command: "tail -f /dev/null"
    depends_on:
      db:
        condition: service_healthy
    environment:
      DATABASE_URL: jdbc:postgresql://db:5432/fusionauth
      DATABASE_ROOT_USERNAME: ${POSTGRES_USER}
      DATABASE_ROOT_PASSWORD: ${POSTGRES_PASSWORD}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      FUSIONAUTH_APP_MEMORY: ${FUSIONAUTH_APP_MEMORY}
      FUSIONAUTH_APP_RUNTIME_MODE: ${FUSIONAUTH_APP_RUNTIME_MODE}
      FUSIONAUTH_APP_URL: http://fusionauth:9011
      SEARCH_TYPE: database
      FUSIONAUTH_APP_KICKSTART_FILE: ${FUSIONAUTH_APP_KICKSTART_FILE}
    networks:
      - db_net
    ports:
      - 9011:9011
    volumes:
      - fusionauth_config:/usr/local/fusionauth/config
      - ./kickstart:/usr/local/fusionauth/kickstart
    extra_hosts:
      - "host.docker.internal:host-gateway"

  otel:
    image: quay.io/signalfx/splunk-otel-collector:latest
    container_name: fa_otel
    environment:
      SPLUNK_ACCESS_TOKEN: "secret"
      SPLUNK_REALM: "us1"
      SPLUNK_LISTEN_INTERFACE: "0.0.0.0"
      SPLUNK_MEMORY_LIMIT_MIB: "1000"
      SPLUNK_CONFIG: /config.yaml
    volumes:
      - ./config.yaml:/config.yaml
    networks:
      - db_net
    # no host ports are needed as communication is inside the docker network
    # ports:
    #   - "13133:13133"
    #   - "14250:14250"
    #   - "14268:14268"
    #   - "4317:4317"
    #   - "6060:6060"
    #   - "7276:7276"
    #   - "8888:8888"
    #   - "9080:9080"
    #   - "9411:9411"
    #   - "9943:9943"

networks:
  db_net:
    driver: bridge

volumes:
  db_data:
  fusionauth_config:
```

The FusionAuth Dockerfile starts FusionAuth like this:

```sh
exec "${JAVA_HOME}/bin/java" -javaagent:/usr/local/fusionauth/otel.jar -Dotel.resource.attributes=service.name=fusionauth -Dotel.traces.exporter=otlp -Dotel.exporter.otlp.endpoint=http://otel:4317 -cp "${CLASSPATH}" ${JAVA_OPTS} io.fusionauth.app.FusionAuthMain <&- >> "${LOG_DIR}/fusionauth-app.log" 2>&1
```

Why can't the FusionAuth container connect to http://otel:4317 please?
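For reference, the collector normally has to expose its OTLP gRPC receiver on all interfaces for other containers to reach it; a minimal sketch of that section of a collector config (an assumption about the shape of the linked otlp_config_linux.yaml, not a copy of it):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
```

If the receiver ends up bound to localhost inside the container, connections from other containers on the same Docker network are typically rejected, which can look like the resets described here.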
Hello, I recently tested a sourcetype for a new input via the props.conf file on my standalone dev environment, and it worked perfectly: the data was correctly parsed. But when I put it in my prod environment, the data that was assigned the sourcetype wasn't parsed at all. My prod environment is distributed (HFs -> DS -> Indexers -> SH), but I was careful to put the sourcetype both on the heavy forwarder and on the search head as recommended, and I've restarted both the HF and the SH, yet it still doesn't work. Does anyone have an idea of what I can do to fix it?
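As background, props.conf settings split by phase, which is often the cause of "works on standalone, breaks in distributed": index-time settings must live on the first full instance that parses the data (the HF here), while search-time settings must live on the search head. A minimal sketch with a placeholder sourcetype name and example values:

```
# props.conf on the heavy forwarder (index-time / parsing settings)
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30

# props.conf on the search head (search-time settings)
[my_sourcetype]
KV_MODE = auto
EXTRACT-example_field = ^(?<example_field>\S+)
```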
Hello, we have a Red status for Ingestion Latency; it says the following:

Red: The feature has severe issues and is negatively impacting the functionality of your deployment. For details, see Root Cause.

However, I can't figure out how to see the "Root Cause". What report should I look at that would show me where this latency is occurring? Thanks for all of the help, Tom
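A hedged sketch of an ad-hoc search that can show where ingestion latency is coming from (index scope and time range are placeholders; _indextime minus _time is the per-event delay between the event timestamp and when it was indexed):

```
index=* earliest=-4h
| eval latency_s = _indextime - _time
| stats avg(latency_s) AS avg_latency_s, max(latency_s) AS max_latency_s, count by index, sourcetype, host
| sort - max_latency_s
```

Clicking the health status badge in Splunk Web also typically opens the detailed health report, which includes the per-indicator root cause text.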
Hi Team, I need your assistance with configuration changes in Splunk. The requirement is to change the timezone based on different "source" values (not sourcetype). We have different sources defined in our application. All of them are in their respective server timezone, except for the two sources below (these 2 are in the EST timezone & our requirement is to change it into the CET timezone):

source=/applications/testscan/*/testscn01/*
source=/applications/testscan/*/testcpdom/*

For the rest of the sources, I do not want to make any change to the timezone. For example:

source=/applications/testscan/*/testscn02/*
source=/applications/testscan/*/testnycus/*
source=/applications/testscan/*/testnyus2/*
source=/applications/testscan/*/testshape/*
source=/applications/testscan/*/testshape2/*
source=/applications/testscan/*/testshape3/*

Please note, we do not have any "props.conf" file available or configured on the server. We maintain the Splunk configuration only in the "inputs.conf" file. The present content of "inputs.conf" is as follows:

[monitor:///applications/testscan/.../]
whitelist = (?:tools\/test\/log\/|TODAY\/LOGS\/)*\.(?:log|txt)$
index = testscan_prod
sourcetype = testscan
_TCP_ROUTING = in_prod

[monitor:///applications/testscan/*/*/tools/test_transfer/log]
index = testscan_prod
sourcetype = testscan
_TCP_ROUTING = in_prod

[monitor:///applications/testscan/*/*/tools/test_reports/log]
index = testscan_prod
sourcetype = testscan
_TCP_ROUTING = in_prod

Please suggest what changes need to be made so that the timezone can be managed based on the "source" information provided. @ITWhisperer
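A minimal sketch of the kind of per-source timezone override involved, as an illustration rather than a confirmed answer (props.conf would need to be added on the indexer or heavy forwarder that first parses this data). Note that TZ declares the timezone the raw timestamps are written in, so for files written in EST that is what TZ should name; rendering the events in CET is then a per-user timezone preference in Splunk Web:

```
# props.conf on the parsing tier (indexer or heavy forwarder)
[source::/applications/testscan/*/testscn01/*]
TZ = US/Eastern

[source::/applications/testscan/*/testcpdom/*]
TZ = US/Eastern
```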
How do I allow Splunk to be accessed publicly? If I am using Splunk from a different gateway, what will I have to do to use Splunk Web?
I post a metric to HEC according to the spec in "Get metrics in from other sources" (Splunk Documentation). The API reports back HTTP 200, "success". The stats are not viewable in the Analytics area. I am unable to view index=_internal and have no access to the main system, so I wanted to troubleshoot from my side. When using the same endpoint for logs, the endpoint is fully functional. Can anyone provide suggestions for troubleshooting?
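For comparison, a hedged sketch of what a metrics payload to the collector endpoint usually looks like (token, index and metric names are placeholders). The two details that most often cause a silent "200 but nothing in Analytics" are the target index not being a metrics-type index, and the payload using the event format instead of the metric format:

```
POST /services/collector HTTP/1.1
Host: <splunk-host>:8088
Authorization: Splunk <hec-token>

{
  "time": 1717800000,
  "event": "metric",
  "host": "my_host",
  "source": "my_source",
  "index": "my_metrics_index",
  "fields": {
    "metric_name:cpu.util": 42.1,
    "region": "us-east-1"
  }
}
```

If search access is available, `| mcatalog values(metric_name) WHERE index=my_metrics_index` can confirm whether anything actually landed in the metrics index.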
Hi all fellow Splunkthiasts, I need your help in understanding what could possibly go wrong in a HFW upgrade. We collect some logs through a HFW and rely on search-time extractions for this sourcetype. After upgrading the HFW, the extractions stopped working. The data format is basically several Key=Value pairs separated by newlines. Investigating this, I found out:

- There is no props.conf configuration for this sourcetype on the upgraded HFW (and it wasn't there even before the upgrade).
- All relevant configuration is on another instance serving as both indexer and search head.
- props.conf for the relevant sourcetype on the search head has AUTO_KV_JSON=true; I don't see KV_MODE in splunk show config props / splunk btool props list (I suppose it takes the default value "auto").

I have already realized that the upgrade didn't take the right path (it was 7.2.6 upgraded to 9.2.1, without gradually upgrading through 8.2). Except for search-time extractions, everything seems to work as expected (data is flowing in, and event breaking and timestamp extraction seem to be correct). What I don't understand is how a HFW upgrade can even affect search-time extractions on another instance. From there I am a bit clueless about what to focus on to fix this.
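For reference, search-time extraction of newline-separated Key=Value data is normally driven by a props.conf stanza on the search head; a minimal sketch with a placeholder sourcetype name (an illustration of where the setting lives, not a diagnosis):

```
# props.conf on the search head (search-time)
[my_kv_sourcetype]
KV_MODE = auto
```

Running `$SPLUNK_HOME/bin/splunk btool props list my_kv_sourcetype --debug` on the search head shows which file each effective setting comes from, which helps confirm whether the upgrade changed configuration precedence.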
Hi Team, there is a requirement to get the license usage, split in GB on a daily basis, for the top 20 log sources along with the host, index and sourcetype details. Kindly help with the query.
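A common starting point is the license usage log on the license manager; a hedged sketch (the time range is a placeholder, and the split-by fields s/h/idx/st are only fully populated when Splunk has not squashed the per-source breakdown):

```
index=_internal source=*license_usage.log* type="Usage" earliest=-7d@d latest=@d
| bin _time span=1d
| stats sum(eval(b/1024/1024/1024)) AS GB by _time, s, h, idx, st
| sort _time, -GB
```

A further per-day rank (e.g. with streamstats) can then cut this down to the top 20 sources per day.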
Hello, we have some AWS accounts that use Firehose to forward logs from AWS to Splunk. A few days ago, I received a notification that the number of channels acquired started rising rapidly and hit the limit; since then we have not been able to send logs to Splunk. Splunk Support helped us move to a new Firehose endpoint, but we still see ServerBusy errors because of the channel limit. Is there any option to monitor how our streams are consuming the channels, and do you have any advice on how to improve this behaviour?
Hi all, I am trying to integrate MS SQL audit log data with a UF instead of DB Connect. What is the best and recommended way to do it that maps all fields? At the moment it is integrated with the UF and uses the "Splunk Add-on for Microsoft SQL Server". I am also seeing one additional dummy event (blank, with no values) alongside every event that comes in. My inputs.conf:

[WinEventLog://Security]
start_from = oldest
current_only = 0
checkpointInterval = 5
whitelist1 = 33205
index = test_mssql
renderXml = false
sourcetype = mssql:aud
disabled = 0
The following two errors repeat every minute in splunkd.log on Splunk Enterprise. What is causing this?

06-07-2024 10:45:00.314 +0200 ERROR ExecProcessor [2519201 ExecProcessorSchedulerThread] - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/search/bin/quarantine_files.py" Quarantine files framework - Unexpected error during execution: Expecting value: line 1 column 1 (char 0)
06-07-2024 10:45:00.314 +0200 ERROR ExecProcessor [2519201 ExecProcessorSchedulerThread] - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/search/bin/quarantine_files.py" Quarantine files framework - Setting enable_jQuery2 - Unexpected error during execution: Expecting value: line 1 column 1 (char 0)
Our Splunk-to-Slack report integration is not displaying all events from the report output. We have a report running whose output contains the records below, but the report triggered to Slack displays only the first record in the alert description/summary. How do we get the entire set into the alert summary/description?

UnmappedActions
test, some value
test, some value
test, some value

base search | stats values(unmapped_actions) as UnmappedActions
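One pattern that sometimes helps when a multivalue field is truncated to its first value in a notification payload is collapsing it into a single delimited string before the alert action runs (a sketch; field names are taken from the post and the delimiter is arbitrary):

```
base search
| stats values(unmapped_actions) AS UnmappedActions
| eval UnmappedActions = mvjoin(UnmappedActions, "; ")
```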
I am a newbie to Splunk; any help is appreciated. I have Splunk Enterprise on my Windows computer and a Splunk forwarder on an Ubuntu VPS server with a Cowrie honeypot built in. My problem is that when I ping-test my local computer from the VPS server, I have 100% packet loss. Also, the splunkd log file is full of errors like "cooked connection to <my-local-ip> timed out" and "... blocked for blocked_seconds=3000. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data." Thanks for helping; I am waiting for your response.
Hi Team, I have the stats group-by fields in a token that changes dynamically based on the time selection. For example, if I select since 1st Jun 24, my query will be like below:

eventtype="abc" | stats count by a,b,c

and if I select a date before 1st Jun 2024, i.e. 30th May 2024, I would like the stats group-by fields to be like below:

eventtype="abc" | stats count by a,d,e

So my current implementation puts the group-by fields in a token; the token is set based on the time selection, and the final query looks like below:

eventtype="abc" | stats count by $groupby_field$

Now the issue is that the Splunk dashboard says "waiting for input" the moment I add the token to the stats group-by field. I'd appreciate your suggestions/help to handle this scenario.

Thanks, Mani
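A minimal Simple XML sketch of one way to keep the panel from sitting at "waiting for input": give $groupby_field$ a value at load time and let the existing time-based logic overwrite it afterwards. The surrounding dashboard structure and the $time_tok$ time token are hypothetical, not the poster's actual dashboard:

```xml
<form>
  <init>
    <!-- default so the search never waits for input -->
    <set token="groupby_field">a,b,c</set>
  </init>

  <row>
    <panel>
      <table>
        <search>
          <query>eventtype="abc" | stats count by $groupby_field$</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```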
Hi Experts, I would like to create the following table from the three events below.

ipv4-entry_prefix    network-instance_name    interface
----------------------------------------------------------------------
1.1.1.0/24           VRF_1001                 Ethernet48

Both event#1 and event#2 have the "tags.next-hop-group" field, and both event#2 and event#3 have the "tags.index" field. All events are stored in the same index. I tried to write a proper SPL query to achieve the above, but I couldn't. Could you please tell me how to achieve this?

- event#1
{
  "name": "fib",
  "timestamp": 1717571778600,
  "tags": {
    "ipv4-entry_prefix": "1.1.1.0/24",
    "network-instance_name": "VRF_1001",
    "next-hop-group": "1297036705567609741",
    "source": "r0",
    "subscription-name": "fib"
  }
}

- event#2
{
  "name": "fib",
  "timestamp": 1717572745136,
  "tags": {
    "index": "140400192798928",
    "network-instance_name": "VRF_1001",
    "next-hop-group": "1297036705567609741",
    "source": "r0",
    "subscription-name": "fib"
  },
  "values": {
    "index": "140400192798928"
  }
}

- event#3
{
  "name": "fib",
  "timestamp": 1717572818890,
  "tags": {
    "index": "140400192798928",
    "network-instance_name": "VRF_1001",
    "source": "r0",
    "subscription-name": "fib"
  },
  "values": {
    "interface": "Ethernet48"
  }
}

Many thanks, Kenji
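A hedged sketch of one way to stitch the three events together. It assumes the JSON is auto-extracted so fields such as tags.next-hop-group, tags.index and values.interface exist at search time, and the index name is a placeholder: first copy the interface from event#3 onto event#2 via tags.index, then group by next-hop-group to combine with event#1.

```
index=<your_index> name=fib
| eval prefix='tags.ipv4-entry_prefix',
       vrf='tags.network-instance_name',
       nhg='tags.next-hop-group',
       fib_idx='tags.index',
       intf='values.interface'
| eventstats values(intf) AS intf by fib_idx
| stats values(prefix) AS prefix, values(vrf) AS vrf, values(intf) AS interface by nhg
| where isnotnull(prefix) AND isnotnull(interface)
| table prefix, vrf, interface
| rename prefix AS "ipv4-entry_prefix", vrf AS "network-instance_name"
```

The eventstats step enriches the event#2 rows with the interface that shares the same tags.index, and the final stats collapses everything sharing a next-hop-group into one row.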