All Topics


Hi, I have registered for Splunk Cloud and clicked "Start free trial", but I still haven't received the email with the Splunk Cloud free trial account details (credentials and link).
Dear all, some dynamic sources in my environment are ingesting more data into Splunk and the license limit gets breached. Is there any way to detect such a source as an outlier through MLTK? For example: the Cisco ASA sourcetype has multiple sources (firewalls) that ingest around 10 GB of data on a daily basis, and suddenly one day license usage reaches 20 GB. How do I identify which source sent more data into Splunk without creating a manual threshold or average of the data?
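A minimal SPL sketch of one way to approach this without a fixed threshold, using the internal license usage log; the sourcetype filter and the 3-sigma cutoff are assumptions to adapt, not taken from the poster's environment:

index=_internal source=*license_usage.log type=Usage st="cisco:asa"
| bin _time span=1d
| stats sum(b) AS bytes by _time, s
| eventstats avg(bytes) AS avg_bytes stdev(bytes) AS sd_bytes by s
| where bytes > avg_bytes + 3 * sd_bytes

This flags any source (the s field) whose daily volume jumps well above its own historical mean. For a model-based version, MLTK's DensityFunction can replace the eventstats/where pair, e.g. | fit DensityFunction bytes by s.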
I'm new here. I want to install and use Splunk on my iPhone and Mac. How do I install it, and where do I start?
Hello, I'm getting an "Action forbidden" error when going to "https://<hostname>/en-US/app/search/analytics_workspace" on Splunk Cloud. Please note: I have logged in with the sc_admin role. Thanks
I've set up a dev 9.2 Splunk environment and I'm trying to use a self-signed cert to secure forwarding, but every time I attempt to connect the UF to the indexing server it fails. I've tried a lot of permutations of the below, all ultimately ending with the forwarder unable to connect to the indexing server. I've made sure permissions are set to 6000 for the cert and key, made sure the forwarder and indexer have separate common names, and created multiple cert types. But I'm at a bit of a loss as to what I need to do to get the forwarder and indexer to connect over a self-signed certificate. Any help is incredibly appreciated. Below is some of what I've attempted (trying not to make this post multiple pages long).

Simple TLS configuration

Generating indexer certs:

openssl genrsa -out indexer.key 2048
openssl req -new -x509 -key indexer.key -out indexer.pem -days 1095 -sha256
cat indexer.pem indexer.key > indexer_combined.pem

Note: I keep reading that the cert and key need to be one file, but I'm not sure on this.

Generating forwarder certs:

openssl genrsa -out forwarder.key 2048
openssl req -new -x509 -key forwarder.key -out forwarder.pem -days 1095 -sha256
cat forwarder.pem forwarder.key > forwarder_combined.pem

Indexer configuration:

[SSL]
serverCert = /opt/tls/indexer_combined.pem
sslPassword = random_string
requireClientCert = false

[splunktcp-ssl:9997]
compressed = true

Outcome: indexer listens on port 9997 for encrypted communications.

Forwarder configuration:

[tcpout]
defaultGroup = splunkssl

[tcpout:splunkssl]
server = 192.168.110.178:9997
compressed = true

[tcpout-server://192.168.110.178:9997]
sslCertPath = /opt/tls/forwarder_combined.pem
sslPassword = random_string
sslVerifyServerCert = false

Outcome: forwarder fails to communicate with the indexer.

Logs (from the indexer, for the connection from the forwarder):

ERROR TcpInputProc [27440 FwdDataReceiverThread] - Error encountered for connection from src=192.168.110.26:33522. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

Testing with openssl s_client:

openssl s_client -connect 192.168.110.178:9997 -cert forwarder_combined.pem -key forwarder.key

Output: Unknown CA (I didn't write the exact message in my notes, but it generally says the CA is unknown.)

Note: not sure if I need to add sslVersions = tls1.2, but that seems outside the scope of the issue.

Troubleshooting the connection, running openssl s_client raw:

openssl s_client -connect 192.168.110.178:9997

Output received:

CONNECTED(00000003)
Can't use SSL_get_servername

Full s_client message is here: https://pastebin.com/z9gt7bhz

Further troubleshooting: added the indexer's self-signed certificate to the forwarder:

sslPassword = random_string
sslVerifyServerCert = true
sslRootCAPath = /opt/tls/indexer_combined.pem

Outcome: same error message.

Testing with s_client:

openssl s_client -connect 192.168.110.178:9997 -CAfile indexer_combined.pem
Connecting to 192.168.110.178
CONNECTED(00000003)
Can't use SSL_get_servername

Full s_client message is here: https://pastebin.com/BcDvJ2Fs
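For reference, a minimal sketch of the pattern that usually resolves this pair of symptoms: SSL23_GET_CLIENT_HELLO:unknown protocol on the indexer generally means the forwarder is still sending cleartext to the SSL port (worth confirming with splunk btool outputs list --debug that the SSL settings are actually being applied), and Unknown CA is expected as long as each side has an independent self-signed cert. A common fix is one private CA that signs both certs; all file names and subjects below are hypothetical:

# Create a private CA (hypothetical names throughout)
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.pem -days 1095 -sha256 -subj "/CN=MySplunkCA"

# Indexer cert signed by that CA (repeat the same steps for the forwarder cert)
openssl genrsa -out indexer.key 2048
openssl req -new -key indexer.key -out indexer.csr -subj "/CN=indexer.example.local"
openssl x509 -req -in indexer.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out indexer.pem -days 1095 -sha256

# Splunk expects cert, key, and CA chained into one PEM
cat indexer.pem indexer.key ca.pem > indexer_combined.pem

Then point both sides at the shared CA in server.conf:

[sslConfig]
sslRootCAPath = /opt/tls/ca.pem

With that in place, sslVerifyServerCert = true on the forwarder should stop failing with Unknown CA. (On 9.x, outputs.conf prefers clientCert = /opt/tls/forwarder_combined.pem over the deprecated sslCertPath.)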
Hey folks, been a while - I have a question I figured the community would be better placed to answer: We have a multisite cluster using SmartStore, built in AWS. We are not renewing Splunk, but we need to be able to access the data for the next 7 years, so we want to age it out; it may still need to be searched from time to time. I understand we can convert to a Free license. However, does the architecture impact that (namely, that there is cluster replication)? Or is it possible to have a single standalone instance with Splunk Free to search as needed?
Right now, on a SOAR events/cases/playbooks menu page, a user can select a page size of 5, 10, 15, 25, or 50, which is the number of events, cases, or playbooks displayed per browser page. Is there a way to change that setting? For example, can SOAR display 100 events on one page? Thank you.
Wake up, babe! New .conf25 dates AND location just dropped!! That's right, this year, .conf25 is taking place September 8-11 in Boston, MA!

Experience the new era of .conf with more technical content, more opportunities to build connections with industry leaders, more cutting-edge innovations, and yes, even more fun. Buckle up for three days of tackling your biggest business challenges, getting the most value from the Splunk products you know and love, earning certifications, and learning new skills from experts and users just like you.

Sign up now to get notified when registration opens - you don't want to miss the chance to:

- Engage with Splunk solution engineers, product developers, executives, and trust members.
- Be the first to learn about new product announcements.
- Connect with thousands of industry peers.
- Accelerate your career by participating in interactive sessions and becoming Splunk certified.
- Get hands-on experience with Splunk's entire product portfolio.
- Dance the night away at the Search Party!

For those members of the community interested in speaking at .conf, the call for speakers will be going out late Winter 2025, so stay tuned for that as well! If you have any questions, please consult the .conf25 FAQs - we can't wait to see you there!!
Hi all, I have a bar chart, like this one, which in some conditions may have a lot of values to report, but, as you can imagine, it is not very readable. Is it possible to specify a minimum size for each bar and enable a scroll bar so that all events can be seen clearly? Thanks.
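A minimal Simple XML sketch of one common workaround, assuming a classic dashboard: as far as I know there is no native minimum-bar-size or scroll option for charts, but giving the panel a fixed, generous height makes each bar taller and lets the browser scroll the page. The query and the height value are placeholder assumptions:

<panel>
  <chart>
    <search>
      <query>index=main | stats count by category</query>
    </search>
    <option name="charting.chart">bar</option>
    <!-- large fixed height so each bar gets readable room; tune to taste -->
    <option name="height">900</option>
  </chart>
</panel>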
I currently have the issue that I want to trigger a certain alert - let's call it unusual processes or logins. I've created a search in which I find the specific events that are considered suspicious, and I saved it as a scheduled search with an action that writes to the triggered alerts. The timeframe is -20m@m to -5m@m and the cron schedule runs every 5 minutes. Now I see the issue: because the job runs every 5 minutes with that look-back timeframe, I get at least 3 copies of the same event triggered as alerts. My question is: is there an option/way to trigger based on whether or not an event has already occurred? Basically, the search should check: did I already trigger on that event? If yes, don't write it to the triggered alerts; otherwise, write it. Any help is appreciated.
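Two ways to avoid the duplicates. The simplest is to make the window match the schedule (e.g. -10m@m to -5m@m with the same 5-minute cron) so runs never overlap. If the overlapping window is intentional, a common pattern is to remember already-alerted events in a lookup; a minimal sketch, where the base search, the host/user key fields, and the alerted_events.csv lookup name are all assumptions:

<your suspicious-event search>
| eval event_id = md5(host . "|" . user . "|" . _time)
| search NOT [ | inputlookup alerted_events.csv | fields event_id ]
| table _time host user event_id
| outputlookup append=true alerted_events.csv

outputlookup passes the surviving rows through to the alert action while also appending them to the lookup, so the next run sees them in the NOT subsearch and drops them. Seed the lookup once (a first run without the NOT clause works) and prune it periodically so it doesn't grow forever; pick the event_id key fields so that "same event" means what you want it to mean.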
Hi, is it possible to log to Splunk using Laminas\Log\Writer? I've tried to do it but ran into some problems. Do you have any example of how to do it?
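A minimal sketch of one approach, assuming the laminas-log package: there is no Splunk-specific writer in Laminas, so the least surprising route is a standard Laminas\Log\Writer\Stream writing to a local file that a Splunk universal forwarder monitors. The file path and index below are assumptions:

<?php
// composer require laminas/laminas-log
require 'vendor/autoload.php';

use Laminas\Log\Logger;
use Laminas\Log\Writer\Stream;

// Write application logs to a file a Splunk universal forwarder can monitor
$writer = new Stream('/var/log/myapp/app.log'); // hypothetical path
$logger = new Logger();
$logger->addWriter($writer);

$logger->info('order processed', ['order_id' => 12345]);

On the forwarder side, a matching (hypothetical) inputs.conf stanza would be [monitor:///var/log/myapp/app.log] with index = myapp. Sending directly to the HTTP Event Collector is also possible, but that means writing your own Writer class.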
The universal forwarder 9.0.9 included in SOAR 6.2.2 is being flagged for an OpenSSL vulnerability. Does anyone know which UF version is packaged in the 6.3.1 SOAR release?
Hi team, I am Sarthak from the Splunk support team. IHAC who had a specific requirement and asked Splunk support to create an app to meet their needs. The Splunk developer team created the app and uploaded it to Splunkbase; the customer downloaded and installed it on their ES search head, successfully fulfilling the requirement. Now the customer wants the app removed from Splunkbase, as they do not want their customized app to be used by other organizations or individuals, and they would like to ensure the app is no longer available for download or visible there. Can I get all the details about this app, so that I can pass them to the necessary team and find a suitable way to accomplish the customer's need? Thanks
I'm not able to even open a support ticket, as there's a required field I can't fill in. I tried with both my Gmail and my company account; there's no email/domain filtering in our domain.
Dear Splunkers, I'm running version 9.3.1 and would like to perform a search that identifies the most common hours at which trucks have been visiting my site location. My search query is the following:

| addinfo
| eval _time = strptime(Start_time, "%m/%d/%Y %H:%M")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| search Plate!=0
| search Location="*"
| timechart span=1h count by Plate limit=50

With this I'm able to see trucks visiting the location over time in one-hour spans. How do I go on to display the most common hours during which my trucks visit locations? Thank you
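A minimal sketch of one way to get the hour-of-day distribution, reusing the time handling from the query above (field names come from the post):

| addinfo
| eval _time = strptime(Start_time, "%m/%d/%Y %H:%M")
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| search Plate!=0 Location="*"
| eval hour = strftime(_time, "%H")
| stats count AS visits by hour
| sort - visits

strftime(_time, "%H") buckets each visit into its hour of day (00-23) regardless of date, so the first row is the most common visiting hour; swapping the stats for stats count by hour, Location would rank hours per site instead.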
Hello, I have a heavy forwarder that was configured just to forward, not index:

[indexAndForward]
index = false

I tried to install the DB Connect app on that HF, but we faced the ERROR below. Any ideas?
Hi Splunk Experts, I've been trying to apply a set of conditions, but I'm overcomplicating it a bit, so I would like some input. I have a runtime search that produces three fields - Category, Data, Percent - and I join/append some data from a lookup using User. The lookup has multi-value fields, which are prefixed with "Lookup":

User    | Category | Data | Percent | LookupCategory | LookupData       | LookupPercent    | LookupND1        | LookupND2
User094 | 103      | 2064 | 3.44    | 101, 102, 104  | 7865, 4268, 1976 | 7.10, 3.21, 3.56 | 4.90, 2.11, 3.10 | 2.20, 1.10, 0.46
User871 | 102      | 5108 | 5.58    | 103            | 3897             | 7.31             | 5.23             | 2.08
User131 | 104      | 664  | 0.71    | 103, 104, 105  | 2287, 1576, 438  | 0.22, 0.30, 0.82 | 0.11, 0.08, 0.50 | 0.11, 0.02, 0.32
User755 | 104      | 1241 | 1.23    | 102, 104       | 4493, 975        | 0.97, 1.12       | 0.42, 1.01       | 0.55, 0.11

My conditions are as follows:

1. Use a preceding Category if its value is greater than the current Category's. For example, in the row below the Category is 103, so I have to check which of the LookupCategory values from 101 to 103 has max(LookupPercent), and use it if the value at 101 or 102 is greater than 103's.

User094 | 103 | 2064 | 3.44 | 101, 102, 104 | 7865, 4268, 1976 | 7.10, 3.21, 3.56 | 4.90, 2.11, 3.10 | 2.20, 1.10, 0.46

2. Ignore the row if LookupCategory has no value equal to or lower than the current Category. In the case below, Category is 102, but the lookup only has 103 and nothing between 101 and 102, so ignore it.

User871 | 102 | 5108 | 5.58 | 103 | 3897 | 7.31 | 5.23 | 2.08

3. If the lookup's Percent for the current Category is lower than the immediately following category's, find the absolute difference between the current Category's Data and that of both the matching LookupCategory and the immediately following one; if the following one is nearer, use it. Here, LookupCategory 104's Percent (0.30) is less than 105's (0.82), so compare abs(664 - 1576) and abs(664 - 438); since abs(664 - 438) is smaller, 105's row data should be filtered/used.

User131 | 104 | 664 | 0.71 | 103, 104, 105 | 2287, 1576, 438 | 0.22, 0.30, 0.82 | 0.11, 0.08, 0.50 | 0.11, 0.02, 0.32

4. Straightforward: if none of the above conditions match, the same LookupCategory's row (104) should be used for Category 104.

User755 | 104 | 1241 | 1.23 | 102, 104 | 4493, 975 | 0.97, 1.12 | 0.42, 1.01 | 0.55, 0.11
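A minimal sketch of the usual first step for logic like this: flatten the multi-value lookup fields into one row per LookupCategory with mvzip/mvexpand, after which each rule becomes an ordinary eval/eventstats/where comparison. Field names come from the post; only rule 2 is implemented, as an illustration:

| eval pair = mvzip(mvzip(LookupCategory, LookupData, ","), LookupPercent, ",")
| mvexpand pair
| eval LookupCategory = tonumber(mvindex(split(pair, ","), 0)),
       LookupData     = tonumber(mvindex(split(pair, ","), 1)),
       LookupPercent  = tonumber(mvindex(split(pair, ","), 2))
| eventstats min(LookupCategory) AS minLC by User
| where minLC <= Category

The where clause drops users whose lookup has no category at or below the current one (rule 2). Rules 1, 3, and 4 can then be layered on per User with further eventstats (max of LookupPercent below Category, abs() differences on Data) before regrouping with stats list(*) by User.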
Hello, my index configuration is provided below, but I have a question regarding frozenTimePeriodInSecs = 7776000. I have configured Splunk to move data to frozen storage after 7,776,000 seconds (3 months). Once data reaches the frozen state, how can I control the frozen storage if the frozen disk becomes full? How does Splunk handle frozen storage in such scenarios?

[custom_index]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/thawed/$_index_name/thaweddb
homePath.maxDataSizeMB = 1664000
coldPath.maxDataSizeMB = 1664000
maxWarmDBCount = 200
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
coldToFrozenDir = /opt/frozen/custom_index/frozendb
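For context: Splunk does not manage coldToFrozenDir at all. Freezing only copies the bucket's raw data there; after that, disk housekeeping is entirely external, and if the frozen disk fills up, the freeze step (and with it, bucket rolling) can start failing. A minimal sketch of an external cleanup job; the 7-year retention window and the cron schedule are assumptions:

#!/bin/sh
# Hypothetical frozen-bucket pruning, run from cron, e.g.:
#   0 3 * * * /opt/scripts/prune_frozen.sh
FROZEN_DIR=/opt/frozen/custom_index/frozendb

# Frozen buckets land as directories named db_<newest>_<oldest>_<id>;
# delete anything untouched for ~7 years (2555 days)
find "$FROZEN_DIR" -mindepth 1 -maxdepth 1 -type d -name 'db_*' -mtime +2555 -exec rm -rf {} +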
I have been trying to set up Splunk on my Kubernetes cluster so I can use it with a Python script to access the REST API. I have a Splunk Enterprise standalone instance running, and I used a Traefik ingress to expose port 8089:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: splunk-ingress
  namespace: splunk
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  ingressClassName: common-traefik
  tls:
    - hosts:
        - splunk.example.com
      secretName: app-certificate
  rules:
    - host: splunk.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk-stdln-standalone-service
                port:
                  number: 8089

When I try to curl the endpoint it returns an internal server error:

curl -X POST https://splunk.example.com/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

* Host splunk.example.com:443 was resolved.
* IPv6: (none)
* IPv4: xx.xx.xxx.xxx
* Trying xx.xx.xxx.xxx:443...
* Connected to splunk.example.com (xx.xx.xxx.xxx) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=splunk.example.com
*  start date: Dec  6 23:53:06 2024 GMT
*  expire date: Mar  6 23:53:05 2025 GMT
*  issuer: C=US; O=Let's Encrypt; CN=R10
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://splunk.example.com/services/auth/login
* [HTTP/2] [1] [:method: POST]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: splunk.example.com]
* [HTTP/2] [1] [:path: /services/auth/login]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [content-length: 34]
* [HTTP/2] [1] [content-type: application/x-www-form-urlencoded]
> POST /services/auth/login HTTP/2
> Host: splunk.example.com
> User-Agent: curl/8.7.1
> Accept: */*
> Content-Length: 34
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 34 bytes
< HTTP/2 500
< content-length: 21
< date: Mon, 09 Dec 2024 06:54:50 GMT
<
* Connection #0 to host splunk.example.com left intact
Internal Server Error

When I port-forward to localhost, the curl works:

curl -X POST https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

Note: Unnecessary use of -X or --request, POST is already inferred.
* Host localhost:8089 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8089...
* Connected to localhost (::1) port 8089
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 / [blank] / UNDEF
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
*  subject: CN=SplunkServerDefaultCert; O=SplunkUser
*  start date: Dec  9 02:21:04 2024 GMT
*  expire date: Dec  9 02:21:04 2027 GMT
*  issuer: C=US; ST=CA; L=San Francisco; O=Splunk; CN=SplunkCommonCA; emailAddress=support@splunk.com
*  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
* using HTTP/1.x
> POST /services/auth/login HTTP/1.1
> Host: localhost:8089
> User-Agent: curl/8.7.1
> Accept: */*
> Content-Length: 34
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 34 bytes
< HTTP/1.1 200 OK
< Date: Mon, 09 Dec 2024 06:59:54 GMT
< Expires: Thu, 26 Oct 1978 00:00:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, max-age=0
< Content-Type: text/xml; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 204
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
<
<response>
  <sessionKey>{some sessionKey...}</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>
* Connection #0 to host localhost left intact

I am using default confs; I'm not sure if I need to update my server.conf for this. More context: I checked splunkd.log from when I made the request and I get these entries:

12-09-2024 17:19:36.904 +0000 WARN SSLCommon [951 HTTPDispatch] - Received fatal SSL3 alert. ssl_state='SSLv3 read client key exchange A', alert_description='bad certificate'.
12-09-2024 17:19:36.904 +0000 WARN HttpListener [951 HTTPDispatch] - Socket error from 192.168.xx.xx:52528 while idling: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate
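A hedged reading of the symptoms, plus a sketch: the splunkd "bad certificate" warning suggests Traefik does reach port 8089, but the backend TLS handshake fails - splunkd serves HTTPS with its self-signed SplunkServerDefaultCert, and Traefik either speaks plain HTTP to it or rejects the cert, then returns 500. With Traefik 2.x, one common fix is a ServersTransport that skips backend verification plus Service annotations marking the backend as HTTPS. Resource names below are assumptions, and older Traefik versions use the traefik.containo.us/v1alpha1 apiVersion instead:

apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: splunk-insecure-transport
  namespace: splunk
spec:
  # splunkd presents a self-signed cert; skip verification for a lab setup
  insecureSkipVerify: true
---
apiVersion: v1
kind: Service
metadata:
  name: splunk-stdln-standalone-service
  namespace: splunk
  annotations:
    # tell Traefik the backend speaks HTTPS, and which transport to use for it
    traefik.ingress.kubernetes.io/service.serversscheme: "https"
    traefik.ingress.kubernetes.io/service.serverstransport: splunk-splunk-insecure-transport@kubernetescrd

(The Service's existing spec - selector and ports - stays unchanged; only the annotations are added. The serverstransport reference format for the CRD provider is <namespace>-<name>@kubernetescrd.)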
Hi team, we have a requirement to mask/filter data before ingestion in a Splunk Cloud environment. The customer has Splunk Cloud. I am reading through the Ingest Processor and Ingest Actions documentation, and it sounds like both provide pretty much the same capability in Splunk Cloud. Is there any major difference between them? Thanks, SGS