All Posts

Hi, as I said, for a working dev environment you should have at least 4 vCPUs and 8 GB of memory. Even more important is that your disks can sustain at least 800 IOPS, and preferably 1200+ IOPS. This should apply both to the Splunk binary/var disk and to the indexer data disks. One way to test this is to use Bonnie++ or a similar tool; of course, if your infrastructure tools already give you that information, that is enough. r. Ismo
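P.S. If you prefer to check this from inside Splunk rather than with Bonnie++, the platform's own introspection data gives a rough view of disk behaviour. A minimal sketch, assuming the default _introspection inputs are enabled (the data.* field names may differ slightly between versions, so verify them against your events first):

index=_introspection sourcetype=splunk_resource_usage component=IOStats
| timechart span=5m avg(data.avg_total_ms) by data.mount_point

If the average I/O time keeps climbing while you ingest or search, the disks are probably the bottleneck rather than CPU or memory.
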
This indicates that the CPU is spending a significant amount of time waiting for I/O (typically disk) while you are ingesting/parsing data or searching. With Splunk you need to size the system sufficiently, otherwise you will keep getting those messages; remember that Splunk is a workhorse and needs resources.

Have a look at the two posts below, which I recently replied to around iowait:
https://community.splunk.com/t5/Splunk-Enterprise/IOWAIT-Mystery-What-is-it-Is-it-important/m-p/690256#M19597
https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Enterprise-how-does-it-detect-IOWAIT-warning-or-error/m-p/690444#M19605

Go through the performance checklist:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Capacity/Performancechecklist

And look at the guide in terms of performance recommendations:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Capacity/Summaryofperformancerecommendations

In summary, I think you will need to bump up your specifications, but for a dev environment you can ignore those messages unless it starts to crawl and becomes unbearable.
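If you want to see how often the platform is actually flagging I/O pressure before deciding whether to resize, a quick look at splunkd's own logging is usually enough. A minimal sketch, assuming the warnings contain the string "IOWait" (the exact message text and component names vary by version, so treat the filter terms as assumptions):

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "IOWait"
| timechart count by component

A steady trickle on a dev box is usually tolerable; a constant stream during every ingest or search window is the point at which I'd bump the specs.
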
You can do it by overwriting the field, by creating a new field, or by using rangemap; there are many ways to do it. You can also use fieldformat, which changes the displayed value but retains the original. See this example of how, after the stats, severity retains its numerical value and the stats still split by the different numerical values:

| makeresults count=100
| eval severity=random() % 5 + 1
| rangemap field=severity low=1-3 medium=4-4 high=5-5
| fieldformat severity=case(severity<=3, "low", severity=4, "medium", severity=5, "high")
| stats count by severity
| eval x=severity

Hi @hazem, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @hazem, in my opinion it should run as is, but changing the identity password is a very easy step anyway. Ciao. Giuseppe
Hi all, I have a new doubt about the sequence of activities at index time. I have a data flow arriving via HEC on an HF that I need to process, because the data arrive from a concentrator and belong to many different data flows (Linux, Oracle, etc.). So I have to assign the correct sourcetype to these data, and I have to process the logs because they are modified by securelog: the original logs are inserted into a field of a JSON event, with some metadata added.

I configured the following flow.

In props.conf:

[source::http:logstash*]
TRANSFORMS-000 = global_set_metadata
TRANSFORMS-001 = set_sourcetype_by_regex
TRANSFORMS-001 = set_index_by_sourcetype

In transforms.conf:

[global_set_metadata]
INGEST_EVAL = host := coalesce(json_extract(_raw, "host.name"), json_extract(_raw, "host.hostname")), relay_hostname := json_extract(_raw, "hub"), source := "http:logstash".coalesce("::".json_extract(_raw, "log.file.path"), "")

[set_sourcetype_by_regex]
INGEST_EVAL = sourcetype := case(searchmatch("/var/log/audit/audit.log"), "linux_audit", true(), "logstash")

[set_index_by_sourcetype]
INGEST_EVAL = index:=case(sourcetype=linux, "index_linux", sourcetype=logstash, "index_logstash")

In this flow: the first transformation extracts (using INGEST_EVAL) metadata such as host, source and relay_hostname (the concentrator the logs arrive from); the second one assigns the correct sourcetype based on a regex; the third one assigns the correct index based on the sourcetype, using INGEST_EVAL to avoid re-running a regex.

The first two transformations are executed correctly, but the third doesn't use the sourcetype assigned by the second one. I also tried a different approach using CLONE_SOURCETYPE in the second one (instead of INGEST_EVAL) and it works, but I'm checking whether the flow above can work, because it's more linear and should be lighter for the system.

Where could I look for the issue? Is there something wrong in the activity flow? Thank you all. Ciao. Giuseppe
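P.S.: for completeness, another shape I'm considering (untested) is to derive the index inside the same INGEST_EVAL that assigns the sourcetype, so the third transformation isn't needed at all. This assumes that comma-separated expressions in a single INGEST_EVAL are evaluated left to right and that later expressions can reuse earlier results; the stanza name below is just a placeholder:

[set_sourcetype_and_index]
INGEST_EVAL = sourcetype := case(searchmatch("/var/log/audit/audit.log"), "linux_audit", true(), "logstash"), index := case(sourcetype=="linux_audit", "index_linux", true(), "index_logstash")
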
Hi @gcusello, we need to migrate the DB Connect app from the old HWF to the new one, so the plan is to copy the whole DB Connect app from the old to the new HWF. In that scenario, will the identities work fine, or do I need to re-enter them? We have forgotten the identities for all the databases.
Hi @goncalo, the issue is that the default home page is fixed: it isn't possible to define a home page based on the user or the role. The only way is to create another home page common to all the roles. In this way DASHBOARD1 will not be visible to the non-enabled users and you won't get the error page. In all my apps, I always add a general home page to use as a menu and an introduction to the app. Ciao. Giuseppe
Hi @hazem, what's your issue? Connection passwords are encrypted by Splunk and it isn't possible to decrypt them. If you lose the encrypted passwords, they are the ones defined in the database, so you can reset them on the DB and change them in DB Connect. So what's your problem? Ciao. Giuseppe
Hi, thank you for the response. I am very sure that we fulfil these requirements. No ingestion takes place, because there are no Splunk processes running. So, to be clear, it is not Splunk that hangs, but the systemctl command used to stop Splunkd.service. The Splunk processes have been stopped, but the systemctl command does not come back to the prompt. I can see in splunkd.log that Splunk has stopped, and "ps -ef splunk" shows no splunk processes. Regards, Harry
It's probably my own paranoia, but I try not to overwrite a data field like this in case I need the original field for whatever reason. Functionally, though, this would do what I need; I just didn't know if there was a more Splunk-y way to do it.
You can simply use rangemap:

| makeresults count=100
| eval severity=random() % 5 + 1
| rangemap field=severity low=1-3 medium=4-4 high=5-5

What's wrong with setting the value in the same field? Given this mock data:

Severity
1
1
5
4
4
3
3
1
1
2
3
2
2

and this added to your search,

| eval Severity = if(Severity < 4, "lump", Severity)

you will get:

Severity
lump
lump
5
4
4
lump
lump
lump
lump
lump
lump
lump
lump

Is this what you are looking for? (By the way, to pose an answerable question, it is always good to post sample/mock data, the desired output, and an explanation of the logic connecting the illustrated data to the desired output.) Play with this emulation and compare with real data:

| makeresults format=csv data="Severity
1
1
5
4
4
3
3
1
1
2
3
2
2"
``` data emulation above ```

Hello, does anyone have experience configuring Splunk DB Connect with an Informix database? Do we need to install the drivers explicitly for this to be configured? If yes, does anyone have a link where I can download these drivers? I am using a Linux environment. Thanks in advance.
Ok thanks for the answer. That really cleared it up.
Hi, which API token and URL did you use? I tried two different ones and did not have success. I'm using Splunk Cloud with the App for SentinelOne (not the TA or IA); is that OK? Regards
I have a field in my data named severity that can be one of five values: 1, 2, 3, 4, and 5. I want to chart on the following: 1-3, 4, and 5. Anything with a severity value of 3 or lower can be lumped together, but severity 4 and 5 need to be charted separately. The coalesce command is close, but in my case the key is the same; it's the value that changes. None of the mv commands look like they do quite what I need, nor does nomv. The workaround I've considered is an eval command with an if statement that says: if the severity is 1, 2, or 3, set a new field value to 3, then chart off this new field (roughly sketched below). It feels janky, but I think it would give me what I want. Is it possible to do this in a more elegant manner?
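Roughly, the workaround I have in mind is something like this (the new field name is just a placeholder):

| eval severity_group=if(severity<=3, 3, severity)
| chart count by severity_group
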
@jprior Technically, the parameter that controls macro depth is documented as:

max_macro_depth = <integer>
* Maximum recursion depth for macros. Specifies the maximum levels for macro expansion.
* It is considered a search exception if macro expansion does not stop after this many levels.
* Value must be greater than or equal to 1.
* Default: 100

The word 'recursion' is used in the description of the 'max_macro_depth' parameter and also in the error you get when you try to use macros recursively, as in your example. So, whilst one could debate the use of the words 'recursion' and 'recursive', it's really just about depth: macro A expands macro B, which expands C, and so on. We use the term nested macros rather than recursive macros, which, as you've discovered, is not possible. When you know that macros are expanded before the search runs and cannot be affected by the data in the events, recursion is in practice impossible.

We regularly use nested macros several levels deep in some of our frameworks, as macros lend themselves to creating structure. For example, you can define `my_macro(type_a)`, where 'type_a' is a fixed value and the definition takes type as an argument, which then expands to `nested_macro_$type$`, so you can use fixed values in macro calls to reference somewhat dynamic macro trees (see the sketch below).

Reference to limits.conf here: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Limitsconf#Parsing
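As a concrete illustration of that nesting pattern (all macro names below are placeholders, not anything from the original question), the macros.conf entries could look something like this:

# macros.conf - hypothetical nested macros
[my_macro(1)]
args = type
definition = `nested_macro_$type$`

[nested_macro_type_a]
definition = index=main sourcetype=type_a

[nested_macro_type_b]
definition = index=main sourcetype=type_b

A search can then call `my_macro(type_a)` or `my_macro(type_b)` with a fixed value, and the expansion stays well within max_macro_depth because the tree is only two levels deep, not recursive.
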
Thanks for closing the loop
The Splunk way of doing this sort of task is to use stats: you search both data sets, combine the bits you want based on the common field, and then apply conditional logic to the results, e.g.

index=sample cf_org_name=orgname service=xyz sourceSystem!="aaa" (errorCd="*-701" status=FAILED) OR status=SUCCESS
| stats min(eval(if(status="FAILED", _time, null()))) as _time values(status) as status count by accountNumber jobNumber letterId errorCd
| where status="FAILED" AND mvcount(status)=1

This searches both failed and success events and then combines them with stats, retaining _time only if the event is FAILED and splitting by the four fields. Without knowing your data, I don't know if letterId and errorCd have a 1:1 correlation with jobNumber, so you'll have to work out whether that will work for you. The final where condition then keeps only events that have ONLY recorded a FAILED status. Subsearches have their uses, but using NOT clauses is generally inefficient, and a single search (no subsearches) is often a better approach.