According to Splunk documentation, you can use SAML with tokens: "Create authentication tokens to use the REST APIs. Tokens are available for both native Splunk authentication and external authentication through either the LDAP or SAML schemes. To learn more about setting up authentication with tokens, see Set up authentication with tokens in the Securing Splunk Enterprise manual." There are some SAML-side requirements (per the token documentation): "Single Sign-On (SSO) schemes that use SAML. These schemes must either support Attribute Query Requests (AQR) or provide information through scripted authentication extensions." Hope this helps!
Our dashboards were made in HTML with custom JS, which is now considered a vulnerability, so we have to rewrite all dashboards in Dashboard Studio. This is what we used to load the dashboard from an XML file:
<view template="app-name:/templates/file.html">
  <label>Name of the app</label>
</view>
Apparently the line element is always on top, no matter the order in the source code. For example, in the attached photo, the PTS block with the red circle must be on top of the green, blue and orange lines (the example is from the old dashboard). As far as I understand, HTML is not supported in Dashboard Studio, and I can't find another way to solve this problem.
I also did a dbinspect on this index and searched for the bucketId. It gives the following:
bucketId        aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319
endEpoch        1660559027
eventCount      0
guId            B5D4AECD-273A-4CB5-88B4-F6C5C75C3564
hostCount       0
id              183
index           aaaaaa
modTime         01/30/2024:15:24:57
path            /opt/splunk/data/cold/aaaaaa/db_1660559027_1659954230_183_839799B0-6EAF-436C-B12A-2CDC010C1319
rawSize         0
sizeOnDiskMB    3.078125
sourceCount     0
sourceTypeCount 0
splunk_server   server1.bez.nl
startEpoch      1659954230
state           cold
tsidxState      full
I don't understand why it says the bucket is in cold. This index (like all indexes on these servers) has been migrated to SmartStore, so this path is wrong. Am I missing something? And why eventCount 0 and rawSize 0, but still a startEpoch and endEpoch, without any events?
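For reference, the output above came from a dbinspect search along these lines (index and bucketId exactly as in the post):
| dbinspect index=aaaaaa
| search bucketId="aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319"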
Hi, We have two indexes which are stuck in a fixup task. Our environment consists of several indexing peers which are attached to SmartStore. This morning there is a warning that SF and RF are not met. Two indexes are in this degraded state. Checking the bucket status, there are two buckets from two different indexes which don't get fixed. Those buckets are mentioned in the search factor fixup, replication factor fixup and generation. The last one has the notice "No possible primaries". Searching on the indexer which is mentioned in the bucket info, it says:
DatabaseDirectoryManager [838121 TcpChannelThread] - unable to check if cache_id="bid|aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319|" is stable with CacheManager as it is not present in CacheManager
and
ERROR ClusterSlaveBucketHandler [838121 TcpChannelThread] - Failed to trigger replication (err='Cannot replicate remote storage enabled warm bucket, bid=aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319 until it's uploaded'
What can be wrong, and what can we do about it? Thanks in advance. Splunk Enterprise v9.0.5, on-premises SmartStore.
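If it helps anyone reproduce this, the splunkd messages quoted above can be pulled back out of _internal with a search roughly like this (the bucket GUID is the one from the post):
index=_internal sourcetype=splunkd (DatabaseDirectoryManager OR ClusterSlaveBucketHandler) "839799B0-6EAF-436C-B12A-2CDC010C1319"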
I have a lookup file like the one below. The query should send a mail to each person containing that person's row information. The recipient should come from the firstmail column; if firstmail is empty, the query should fall back to secondarymail, and if that is also empty, to thirdmail.
Emp occupation location firstmail secondarymail thirdmail
abc aaa hhh aa@mail.com gg@mail.com
def ghjk gggg bb@mail.com ff@mail.com
ghi lmo iiii hh@mail.com
jkl pre jjj dd@mail.com
mno swq kkk aa@mail.com ii@mail.com
For example, aa@mail.com should receive a mail like the one below, in tabular format:
Emp occupation location firstmail secondarymail thirdmail
abc aaa hhh aa@mail.com gg@mail.com
mno swq kkk aa@mail.com ii@mail.com
So the query should read the complete table and send a mail to each person individually, containing that person's row information in tabular format. Please help me with the query and let me know if any clarification on the requirement is needed.
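A minimal sketch of one way to do this, assuming the lookup file is named employees.csv and that one email per distinct recipient is acceptable (the file name, subject line and maxsearches value are illustrative):
| inputlookup employees.csv
| eval recipient=case(firstmail!="", firstmail, secondarymail!="", secondarymail, thirdmail!="", thirdmail)
| where isnotnull(recipient)
| dedup recipient
| map maxsearches=50 search="| inputlookup employees.csv | eval recipient=case(firstmail!=\"\", firstmail, secondarymail!=\"\", secondarymail, thirdmail!=\"\", thirdmail) | where recipient=\"$recipient$\" | table Emp occupation location firstmail secondarymail thirdmail | sendemail to=\"$recipient$\" subject=\"Your records\" sendresults=true inline=true format=table"
The outer search builds the list of distinct recipients; map then re-reads the lookup once per recipient, keeps only that person's rows and mails them as a table. That is fine for a small lookup; for a large one, a scheduled search per recipient group may scale better.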
Hello fellow Splunkthusiasts! TL;DR: Is there any way to connect one indexer cluster to two distinct license servers? Our company has two different licenses: one acquired directly by the company (we possess the license file); the other was acquired by a corporate group to which our company belongs and is provided to us through the group's license server (it is actually a larger license split into several pools, one of which is available to us). The obvious solution is to have one IDXC for each license, with SHs searching both clusters. However, both licenses together are approximately 100 GB/day, so building two independent indexer clusters feels like a waste of resources. What is the best way to approach this?
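For context, a Splunk instance is attached to exactly one license manager through the [license] stanza in server.conf, which is what makes splitting a single cluster across two license servers awkward. A minimal sketch of the relevant setting (hostname is illustrative; newer versions also accept manager_uri in place of master_uri):
# server.conf on an indexer acting as a license peer
[license]
master_uri = https://license-manager.example.com:8089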
Hi,
After migrating to version 9.1.2 we have to rewrite some classic dashboards in Dashboard Studio. Is there a way to send the colored lines to the back or to send the circles to the front? It simply won't work to put any figure on top of the lines; the lines are always on top. I tried to insert some HTML customization, but still nothing:
<row>
  <panel>
    <html>
      <style>
        div[data-id*="_CIRCLE"] {
          z-index: 100;
        }
      </style>
    </html>
  </panel>
</row>
Any help would be much appreciated.
You don't need to specify field=_raw as this is the default field. Anyway, you just need to follow your extraction with a space. | rex "status is\s(?<status>[^\s]+)\s"
Hi everyone, I need an alternative to the transaction command, because it's taking too much time to load the dashboard. This is my actual data:
Botid   count
1528    1
1228    1
1015    1
1558    1
12      1
1698    1
1589.15 1
1589    1
I am looking for an output like below:
BotId                 count
1528,1228,1015,1558   1
12,1698,1589.2,1589   2
Thanks in advance.
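A rough sketch of the usual replacement, stats, assuming there is some correlation field that transaction was grouping on (session_id here is purely hypothetical, and the grouping behind the desired output isn't fully clear from the sample, so treat this as a starting point):
... base search ...
| stats list(Botid) as Botid count by session_id
| eval Botid=mvjoin(Botid, ",")
stats is streamable and runs on the indexers, which is usually why it is far faster than transaction.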
Something like this: ,\s(\d\d\.\d\d\.\d\d\s\w+\s\d+\w+\d\d)\s Or would you like to do it in props.conf to set the _time field? TIME_FORMAT = %z, %H.%M.%S %a %d%b%y Try it out here: https://strftime.net/
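For reference, the pattern above can be dropped straight into a rex command (the field name log_time is just an illustration):
| rex ",\s(?<log_time>\d\d\.\d\d\.\d\d\s\w+\s\d+\w+\d\d)\s"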
So I want to extract the last word on each search result, but only when it fulfils the following conditions: 1) it is the last word before a space, and 2) it is not followed immediately by a period ".". Sample events:
the current status is START system goes on …
the current status is STOP please do …..
the current status is PENDING.
My rex extracts the word right after "status is ", but if that word has a period right after it, I don't want to extract it. I have only been able to retrieve everything using the following, not to exclude the words followed by a period:
rex field=_raw "status is\s(?<status>[^\s]+)"
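A minimal sketch of one way to combine both conditions, assuming the status value is a single alphanumeric word; (?:\s|$) only succeeds when the word is followed by whitespace or the end of the event, so "PENDING." is skipped entirely:
| rex "status is\s(?<status>\w+)(?:\s|$)"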
Hello Team, I need help extracting the following date and time from the log. Sample log:
-0900, 04.25.01 THU 22FEB24 nDD62320I
I need the "04.25.01 THU 22FEB24" part. Could someone please help me extract this using rex? Any help is much appreciated.
Currently, I am switching to a higher version of the Lookup Editor app, but I am having "issues" as described below (screenshots attached for Ver 3.3.3 and Ver 4.0.2). Cells with values (low, medium, high, ...) no longer change their background or text color. I checked the console.log output in Ver 4.0.2 and got some logs (also attached). Can anyone give me some advice? Thank you.