What would be the proper way to push an authentication.conf from the deployer without leaving the bind password in clear text? Is it possible to push the authentication.conf from the deployer without the bind password, and then manually add another authentication.conf to each search head in system/local containing only the bind password in the stanza? After a restart of the search head cluster, I'm thinking the bind password would then be encrypted. Would this be the proper way to do this? I would appreciate any other suggestions.
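A minimal sketch of what that split could look like. The strategy name "corp_ldap" and all host/DN values here are illustrative placeholders, not anything from the original post; bindDNpassword is the attribute that holds the bind password and gets encrypted against each node's splunk.secret on restart:

```ini
# Pushed from the deployer, e.g. in
# $SPLUNK_HOME/etc/shcluster/apps/org_ldap_auth/default/authentication.conf
# -- note: no bindDNpassword here
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
userBaseDN = ou=people,dc=example,dc=com
groupBaseDN = ou=groups,dc=example,dc=com

# Added manually on each search head in
# $SPLUNK_HOME/etc/system/local/authentication.conf
# -- only the password; Splunk encrypts it after a restart
[corp_ldap]
bindDNpassword = changeme-cleartext-until-restart
```

Because system/local wins over a deployed app's default directory for the same stanza, the two fragments merge into one effective LDAP strategy on each member.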
Hello. I know this is an old post, but running into this same issue with the bind password being insecure on the deployer. What would be the proper way to push an authentication.conf from the deployer and have the bind password not left in clear text? Is it possible to push the authentication from the deployer without the bind password and then add another authentication.conf manually to each search head in system local with only the bind password in the stanza?
Create an init block which sets the default values for stageToken and indexToken:

<init>
  <set token="stageToken">test</set>
  <set token="indexToken">ap</set>
</init>
Is there any issue with the settings below? Also, is the regex wrong here?

props.conf:

[sourcetype]
TRANSFORMS-filter = setnull,stanza

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[stanza]
REGEX = "Snapshot created successfully"
DEST_KEY = queue
FORMAT = indexQueue
Thank you for replying. No, I don't use a proxy server. My server is in my home: one Windows Server and two Windows clients. The server cannot access the local IP address, and neither can my clients. My router does not act as a proxy server. I will check whether the server's firewall access log records anything when I access 192.168.0.8 from my client PC.
What unit of time is your BatteryAge in: seconds, hours, days? How long is a month? If the current day is the 5th of the month and the age equates to 40 days, what result would you expect?
Unfortunately, I still get null values with these changes. I'm trying to build a comprehensive dashboard that shows every sourcetype per index, with a first event time and a last event time, to see when we started logging events and whether we suddenly stopped, or have an unusually large gap since the last event. We want to set up an alarm to notify us if an index hasn't received an event of a specific sourcetype within a given time threshold. (Sorry if my English is slightly off here.) This dashboard is supposed to be a complete dictionary of sorts over our indexes and sourcetypes.
What is it you are trying to achieve? Can you still get what you want if you try these changes? | sort 0 sourcetype
| stats list(TotalEvents) AS TotalEvents list(FirstEvent) AS "First Event" by index, sourcetype
@gcusello - You were correct, bad code. I hadn't understood the requirement for <fieldForValue>MountedOn</fieldForValue>. Once set, the dropdown populates. Thank you very much!
I found this very useful search for a dashboard on gosplunk:

| rest /services/data/indexes
| dedup title
| fields title
| rename title AS index
| map maxsearches=1500 search="| metadata type=sourcetypes index=\"$index$\" | eval Retention=tostring(abs(lastTime-firstTime), \"duration\") | convert ctime(firstTime) ctime(lastTime) | sort lastTime | rename totalCount AS \"TotalEvents\" firstTime AS \"FirstEvent\" lastTime AS \"LastEvent\" | eval index=\"$index$\""
| fields index sourcetype TotalEvents FirstEvent LastEvent Retention
| sort sourcetype
| stats list(sourcetype) AS SourceTypes list(TotalEvents) AS TotalEvents list(FirstEvent) AS "First Event" by index
| append [| rest /services/data/indexes | dedup title | fields title | rename title AS index]
| dedup index
| fillnull value=null SourceTypes TotalEvents "First Event" "Last Event" Retention
| sort index
| search index=* (SourceTypes=*)

However, when I first ran it, some of the "LastEvent" values appeared correctly. Ever since then, "LastEvent" and "Retention" have always been null, and I can't figure out why I don't get any return values in these fields. I got an error saying the limit of 100 on the "list" function was surpassed, so I tried replacing "list()" with "values()" in the search, but the result is the same, just without the error.
It is important where you put your settings. Parsing is done on the first "heavy" component in an event's path to the indexers, so if you have a heavy forwarder (HF) as an intermediate forwarder, you need to put your props/transforms there. Of course you will still see already-indexed events when searching; index-time transforms are applied only to new events.
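As a sketch of that placement (the sourcetype and app names are illustrative assumptions), filtering settings like those discussed earlier in the thread would live on the heavy forwarder, not on the universal forwarder or the search head:

```ini
# On the heavy forwarder, e.g.
# $SPLUNK_HOME/etc/apps/org_filtering/local/props.conf
[my_sourcetype]
TRANSFORMS-filter = setnull, keep_snapshot

# $SPLUNK_HOME/etc/apps/org_filtering/local/transforms.conf
# First send everything to the nullQueue...
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then route matching events back to the indexQueue.
# Note: no surrounding quotes in REGEX -- they would be matched literally.
[keep_snapshot]
REGEX = Snapshot created successfully
DEST_KEY = queue
FORMAT = indexQueue
```

If the data instead arrived directly from universal forwarders, the same files would go on the indexers, since that would be the first heavy component in the path.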
OK. But do you have just one column with multiple values? Or do you have multiple columns? How would your lookup contents match the data you want to search for?
It depends heavily on the components involved, but this is fairly normal functionality for a SOAR playbook: get an artifact, manipulate it, check it against configured external services, and return a report, or use the result of such a check to modify behaviour later in the playbook. You can download the community version of Splunk SOAR and see for yourself.
Thank you for your response! Could you please share your insights on how we can achieve this in a Splunk SOAR environment? Additionally, if there are any apps on Splunkbase that provide similar functionality, I would greatly appreciate your recommendations.
I have a lookup file saved with a single column holding values of a specific field, and I want to use it in a search query to match against that field's values. Example: lookup name: test.csv, column name: column1, field name: field1.
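One common pattern for this (using the lookup and field names from the example; the index name is a placeholder) is a subsearch that renames the lookup column to the event field name, so the returned values become an implicit OR filter on that field:

```
index=your_index
    [| inputlookup test.csv
     | rename column1 AS field1
     | fields field1 ]
```

The subsearch expands to something like (field1="value1" OR field1="value2" ...), so the outer search returns only events whose field1 appears in the lookup.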
Hi @redmandba, if a search gives results, it can be used in a dropdown. Can you share the code of your dropdown? Maybe the issue is in the other parameters. Ciao. Giuseppe
Hi @shoaibalimir, the formula is always the same, but on Splunk Cloud you don't need to think about the required storage: you only need to think about how many logs must be indexed every day; the required storage is a problem for the Splunk Cloud administrators. Your contract should define the daily indexed volume and the retention period, so storage isn't your problem. License consumption and storage entitlement are two related but different values; you only need to pay attention to license consumption, to avoid exceeding the limit too many times. Ciao. Giuseppe