All Posts

Hello, I have two files that contain the root Certificate Authority chain that issued my server certificate. Not sure if that is stated correctly, but I do have two files with certificate authority content, and I must use both of them somehow in web.conf. My web.conf settings are similar to the below:

### START SPLUNK WEB USING HTTPS:8443 ###
enableSplunkWebSSL = 1
httpport = 8443
### SSL CERTIFICATE FILES ###
privKeyPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\privkey.pem
serverCert = $SPLUNK_HOME\etc\auth\DOD.web.certificates\cert.pem
sslRootCAPath =

For "sslRootCAPath =" I need to add both of the files I have. However, the Splunk documentation does not specify how to add multiple files, whether I can separate them with ",", or whether there is another property that accepts additional certificate authority files. The files and a sample of their content are below. They are mostly made up of "noise" except for the header and footer sections of each file, so I am not sure how I would combine them into a single file if that is what Splunk expects.

ca-200.pem
-----BEGIN CERTIFICATE-----
MIIEjzCCA3egAwIBAgICAwMwDQYJKoZIhvcNAQELBQAwWzELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
-----END CERTIFICATE-----

MyCompanyRootCA3.pem
-----BEGIN CERTIFICATE-----
MIIDczCCAlugAwIBAgIBATANBgkqhkiG9w0BAQsFADBbMQswCQYDVQQGEwJVUzEY
-----END CERTIFICATE-----

Any help is appreciated.
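(Editor's sketch, not from the thread: a common approach is to concatenate the two PEM files into a single CA bundle and point sslRootCAPath at that one file, since a PEM file can hold several certificates back to back. The bundle file name below is made up.)

```
rem Run from the certificates directory; the output file name is illustrative
type ca-200.pem MyCompanyRootCA3.pem > cacert_bundle.pem
```

web.conf would then reference the combined file, e.g. sslRootCAPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\cacert_bundle.pem.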
Hi! Thank you! UUID did not work, but Group ID did. This was an oversight on my part, thank you.
SEDCMD is mainly used to transform the raw data (mask or delete text), not to drop whole events, so I don't think it would work (I can't test), and I don't think you can use SEDCMD at the UF level. The option you may have is to send events with no data to the null queue, so it's worth a go. Configure the below and deploy it on the indexer. This would route any event with no data to the null queue (again, I don't have a test environment, so you will have to trial-and-error it).

props.conf (add your sourcetype here):

[wineventlog]
TRANSFORMS-my_windows_blank_events = setnull

transforms.conf:

[setnull]
REGEX = ^\s*$
DEST_KEY = queue
FORMAT = nullQueue
I just realized I was way, way overthinking this. BY DEFINITION, a p99 value is one that 99% of the events are at or below, thus leaving ONE PERCENT of the total number of events higher. All I need is:

index=foo message="magic string"
| stats count as total
| eval over_p99=round(total / 100, 0)
| fields - total

which is fast. If I wanted the number over p95, I would just use round(total / 100 * 5, 0), etc. I will be keeping your approach in mind, however, as that's still really good to know. Ty!
Yes, you are on the right path: search-time extraction is done on the SH, even after the data is cooked and sent to the indexers. Here are some concepts:

Search-time extractions: this is where the raw data extractions happen dynamically, on the SH when the SPL is run. This is called "schema on the fly": fields are extracted at search time rather than when the data is ingested, unlike traditional SQL databases, and it is designed to be fast. The search heads have the TAs installed to do these on-the-fly extractions via props/transforms, and this is the right way to do field extractions.

Forwarders: a UF or HF is designed to forward data and attach metadata such as sourcetype, host, and index. The HF can additionally parse the data and extract fields ("cooked" data), which means less work for the indexers.

Indexers: they store the raw data and can also parse the data if they are configured to do so (by having the TAs installed), mainly when data is sent directly from UFs.

Forwarders such as HFs affect how the data is pre-processed before it reaches the indexers; they apply sourcetype, host, and index metadata. UFs also apply that metadata, but they do not parse, except for some structured data types.

HF: by performing parsing and field extraction at index time, the data arrives pre-formatted, which can make the indexers more performant. Applies metadata.
UF: sends the raw data to the indexer, which means the indexer must handle the processing and can sometimes become overloaded, so it is better to do it at the HF level if you can. Applies metadata.

These are possible reasons for it not working. The upgrade should not cause this unless there was some config in the default props.conf file that has been overwritten; any custom config should be placed in the local props.conf file:

- Props/transforms changes (on the SH/indexers)
- The TA may have been modified
- Inputs changes to metadata (sourcetype name)
- Overwritten props in default

This may further help you understand index time vs search time: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Indextimeversussearchtime

It might be worth running btool on the search head and checking where the custom props is configured:

/opt/splunk/bin/splunk btool props list --debug my:sourcetype

Typically it should look like the example below:

/opt/splunk/etc/apps/My_Custom_app_props/local/props.conf
or
/opt/splunk/etc/system/local/props.conf
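(Editor's note: as a small illustration of a search-time extraction configured in props.conf on the search head; the sourcetype, class name, and field name below are made up, not from this thread.)

```
# props.conf in an app's local directory on the search head
# (hypothetical sourcetype and field name)
[my:sourcetype]
EXTRACT-status_code = status=(?<status_code>\d{3})
```

Running the btool command shown above would then report this stanza along with the file it came from.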
Hi @marco_massari11,
you should run something like this (if the search is only on index, host, source and sourcetype):

| tstats count WHERE index=your_index BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you have to use a more complex search, you can use:

<your_search>
| stats count BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao.
Giuseppe
Same here, but we use Ansible.
I had made the change to props.conf but I'm not seeing anything show up today (06/07/24). 06/06/24 logs were fine, as I'd expect. I'd like to make sure I did the props.conf right. I put this under etc/system/local/props.conf:

[auditd] {is this the right text here? I thought I had read this should be the sourcetype. I got the sourcetype from inputs.conf of rlog.sh under the Splunk_TA_nix app.}
TIME_FORMAT=%m/%d/%y %T:%3N {I did this to mimic what I see when I do a search from the Splunk web server; this is what shows up at the left before the actual log entry}
Can you screenshot a mock up of the setup screen as you configured it with your details redacted? 
Hello, I need to monitor some critical devices (stored in a lookup file) that are connected to the CrowdStrike console, in particular whether they become disconnected from it. We receive one event every 2 hours for each device from the CrowdStrike device JSON input in Splunk, so basically if after 2 hours there is no new event, the alert should trigger and report the hostname. Does anyone have an idea for implementing this?
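(Editor's sketch, not from the thread: one way to approach this is to compare the last time each critical device reported against a 2-hour threshold. The index name, lookup file name, and host field below are assumptions.)

```
| inputlookup critical_devices.csv
| fields host
| join type=left host
    [ | tstats latest(_time) AS last_seen WHERE index=crowdstrike BY host ]
| where isnull(last_seen) OR last_seen < relative_time(now(), "-2h")
```

Scheduled every couple of hours as an alert, this would return only the critical hosts with no event in the last 2 hours.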
Did you put "default" as the org
Hi. I am new to Splunk. I have configured everything and have been trying to solve this issue for 2 days. I have a universal forwarder on an Ubuntu server on a different network. I have downloaded Splunk Enterprise to my Windows 10 computer. My port 9997 is enabled, the firewall is disabled, and I even forwarded port 9997 through the Zyxel interface. Splunk is listening on port 9997. The thing is, a telnet connection from any other source to my computer (I tried with both my mobile internet and the UF client) still gets denied. How should I proceed to make it work? I'm stuck so bad. Thanks for your help.

This is the mobile internet test with Test-NetConnection to my PC (the Splunk server, I guess):

ComputerName : x.x.x.x <desired connection>
RemoteAddress : x.x.x.x <desired connection>
RemotePort : 9997
InterfaceAlias : Wi-Fi
SourceAddress : x.x.x.x <my ip>
PingSucceeded : False
PingReplyDetails (RTT) : 0 ms
TcpTestSucceeded : False
We have a pre-production environment which is totally separate from our production environment. A number of our dashboards use hyperlinks back into source systems to view complete records, and parts of the URLs differ because of the different IP addresses in the two environments. What I would like to do is set a global setting so that we can use the same dashboards without having to recode the links when we migrate from pre-production to production. I have tried to use a macro, but as these are limited to searches it did not affect the link target. Is there any other way we can do this without having to add steps to the migration to amend the URLs? Thanks, Steven
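(Editor's sketch, not from the thread: one possible approach in Simple XML is to keep the environment-specific base URL in a lookup, set a dashboard token from it, and build the drilldown links from that token, so only the lookup differs between environments. The lookup file, field, and link path below are made up.)

```xml
<dashboard>
  <label>Example with environment-specific link base</label>
  <!-- Populate a token from an environment-specific lookup (hypothetical file and field names) -->
  <search>
    <query>| inputlookup env_settings.csv | head 1 | fields base_url</query>
    <done>
      <set token="base_url">$result.base_url$</set>
    </done>
  </search>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal | stats count by sourcetype</query>
        </search>
        <drilldown>
          <!-- The link is built from the token set above -->
          <link target="_blank">$base_url$/records?type=$row.sourcetype$</link>
        </drilldown>
      </table>
    </panel>
  </row>
</dashboard>
```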
Hey guys, I have a question about CSS in Splunk. I want to move these displays like this: [screenshot] I can't use Dashboard Studio because I need some special things that Dashboard Studio doesn't have yet, like the depends functionality. I'm trying with CSS grid, but I haven't found a way to do this yet.
Hi gcusello, thank you for taking the time to review this, and for your quick reply! We do in fact have heavy forwarders at each tenant, so there is local control over any traffic passing through those. That said, I am told we have over 40,000 endpoints reporting directly to the cloud, so we definitely do not know the hostnames for everything. Each tenant does have its own HEC key for those submissions, so that would be about the only way to identify and separate that traffic. In my original posting, my intended means of making these changes was a custom inputs.conf file on all the HFs and UFs rather than modifying the events on arrival, since the inputs file is already at the source. Aside from the overhead and logistics of having to deploy that file to that many workstations and servers, I was wondering if there is any downside to this approach from the perspective of the Splunk agent itself. The ability to override some or all of any portion of any .conf file is built into the design, so it seems like a custom inputs.conf wouldn't be a problem in any way. Any feedback on that approach? Thank you!
I just posted a question that was immediately rejected. How do I get it approved please?
I use the OpenTelemetry Java agent to monitor FusionAuth in one Docker container, and send the output to the Splunk OpenTelemetry Docker container in Gateway mode.

Here's a diagram of my system architecture:

```mermaid
graph LR
  subgraph I[Your server]
    direction LR
    subgraph G[Docker]
      H[(Postgresql)]
    end
    subgraph C[Docker]
      direction BT
      D(OpenTelemetry for Java) --> A(FusionAuth)
    end
    subgraph E[Docker]
      B(Splunk OpenTelemetry collector)
    end
  end
  C --> G
  C --> B
  E --> F(Splunk web server)
  style I fill:#111
```

The Splunk container runs correctly and exports sample data to Splunk Observability Cloud. I can see it in the dashboard.

FusionAuth and the Java agent run correctly. But the Otel sender cannot send to the Otel collector. I get network errors:

```sh
| [otel.javaagent 2024-06-07 13:52:40:936 +0000] [OkHttp http://otel:4317/...] ERROR io.opentelemetry.exporter.internal.http.HttpExporter - Failed to export logs. The request could not be executed. Full error message: Connection reset
fa | java.net.SocketException: Connection reset
fa | at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:328)
fa | at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:355)
...

[otel.javaagent 2024-06-07 13:52:42:847 +0000] [OkHttp http://otel:4317/...] ERROR io.opentelemetry.exporter.internal.http.HttpExporter - Failed to export spans. The request could not be executed. Full error message: Connection reset by peer
fa | java.net.SocketException: Connection reset by peer
fa | at java.base/sun.nio.ch.NioSocketImpl.implWrite(NioSocketImpl.java:425)
fa | at java.base/sun.nio.ch.NioSocketImpl.write(NioSocketImpl.java:445)
fa | at java.base/sun.nio.ch.NioSocketImpl$2.write(NioSocketImpl.java:831)
fa | at java.base/java.net.Socket$SocketOutputStream.write(Socket.java:1035)
```

I'm using the standard configuration file for the Splunk Linux Collector - https://github.com/signalfx/splunk-otel-collector/blob/main/cmd/otelcol/config/collector/otlp_config_linux.yaml

Below is my docker compose file:

```yaml
services:
  db:
    image: postgres:latest
    container_name: fa_db
    ports:
      - "5432:5432"
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U postgres" ]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - db_net
    volumes:
      - db_data:/var/lib/postgresql/data

  fa:
    # image: fusionauth/fusionauth-app:latest
    image: faimage
    container_name: fa
    # command: "tail -f /dev/null"
    depends_on:
      db:
        condition: service_healthy
    environment:
      DATABASE_URL: jdbc:postgresql://db:5432/fusionauth
      DATABASE_ROOT_USERNAME: ${POSTGRES_USER}
      DATABASE_ROOT_PASSWORD: ${POSTGRES_PASSWORD}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      FUSIONAUTH_APP_MEMORY: ${FUSIONAUTH_APP_MEMORY}
      FUSIONAUTH_APP_RUNTIME_MODE: ${FUSIONAUTH_APP_RUNTIME_MODE}
      FUSIONAUTH_APP_URL: http://fusionauth:9011
      SEARCH_TYPE: database
      FUSIONAUTH_APP_KICKSTART_FILE: ${FUSIONAUTH_APP_KICKSTART_FILE}
    networks:
      - db_net
    ports:
      - 9011:9011
    volumes:
      - fusionauth_config:/usr/local/fusionauth/config
      - ./kickstart:/usr/local/fusionauth/kickstart
    extra_hosts:
      - "host.docker.internal:host-gateway"

  otel:
    image: quay.io/signalfx/splunk-otel-collector:latest
    container_name: fa_otel
    environment:
      SPLUNK_ACCESS_TOKEN: "secret"
      SPLUNK_REALM: "us1"
      SPLUNK_LISTEN_INTERFACE: "0.0.0.0"
      SPLUNK_MEMORY_LIMIT_MIB: "1000"
      SPLUNK_CONFIG: /config.yaml
    volumes:
      - ./config.yaml:/config.yaml
    networks:
      - db_net
    # no host ports are needed as communication is inside the docker network
    # ports:
    #   - "13133:13133"
    #   - "14250:14250"
    #   - "14268:14268"
    #   - "4317:4317"
    #   - "6060:6060"
    #   - "7276:7276"
    #   - "8888:8888"
    #   - "9080:9080"
    #   - "9411:9411"
    #   - "9943:9943"

networks:
  db_net:
    driver: bridge

volumes:
  db_data:
  fusionauth_config:
```

The FusionAuth Dockerfile starts FusionAuth like this:

```sh
exec "${JAVA_HOME}/bin/java" -javaagent:/usr/local/fusionauth/otel.jar -Dotel.resource.attributes=service.name=fusionauth -Dotel.traces.exporter=otlp -Dotel.exporter.otlp.endpoint=http://otel:4317 -cp "${CLASSPATH}" ${JAVA_OPTS} io.fusionauth.app.FusionAuthMain <&- >> "${LOG_DIR}/fusionauth-app.log" 2>&1
```

Why can't the FusionAuth container connect to http://otel:4317 please?
Hello, I recently tested a sourcetype for a new input via the props.conf file in my standalone dev environment, and it worked perfectly: the data was correctly parsed. But when I put it in my prod environment, the data that was assigned the sourcetype wasn't parsed at all. My prod environment is distributed (HFs->DS->Indexers->SH), but I was careful to put the sourcetype on both the heavy forwarder and the search head as recommended, and I restarted both the HF and the SH, but it still doesn't work. Does anyone have an idea of what I can do to fix it?
Hello, we have a Red status for Ingestion Latency. It says the following:

Red: The feature has severe issues and is negatively impacting the functionality of your deployment. For details, see Root Cause.

However, I can't figure out how to see the "Root Cause". What report should I look at that would show me where this latency is occurring? Thanks for all of the help, Tom
I use Eggplant: https://www.keysight.com/us/en/product/EG1000A/eggplant-test.html (https://support.eggplantsoftware.com/downloads/eggplant-functional-downloads). Eggplant understands GUI controls natively; it doesn't just check the HTML source, it actually checks what the user sees. I have had great success testing Swing clients and mobile apps across multiple platforms. You would probably have to write some scripting to check the math.