All Posts


Hey there, results of the | fit command are affected by the time range picker. Once you set the time range to All time, _time is displayed normally.

Edit: I looked into the interaction between inputlookup + fit + time range picker. As documented here, the results of the fit command are appended to the initial dataset. In this case, the expected outcome would be that the resulting table includes only rows that are covered by the time range picker. However, the following happens:

Time range picker: All time. Resulting table: initial dataset + output of fit command. Result: OK, expected result.
Time range picker: some time before the first observation - now. Resulting table: initial dataset + output of fit command. Result: OK, expected result. (Warning: The specified span would result in too many (>50000) rows.)
Time range picker: about halfway through the dataset timestamps - now. Resulting table: initial dataset + output of fit command. Result: OK, unexpected result. (Warning: The specified span would result in too many (>50000) rows.)
Time range picker: some time after the last observation - now. Resulting table: initial dataset + output of fit command. Result: OK, unexpected result. (Warning: The specified span would result in too many (>50000) rows.)
Time range picker: some time before the first observation - some timestamp after the last observation. Resulting table: output of fit command only. Result: NOT OK, unexpected result.

I checked the sources that were available to me (search.log, .py files), but sadly this did not suffice to reverse engineer how the initial dataset and the output of the fit command are merged and filtered. It seems that earliest has no effect, but once latest is set to a timestamp, the behavior becomes unexpected.
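For reference, a minimal sketch of the kind of search under discussion (the lookup name, field name, and algorithm are hypothetical, not from the original post):

| inputlookup my_timeseries.csv
| fit LinearRegression value from _time into my_model

With a search of this shape, the documented behavior would be that the fitted output is appended to the lookup rows, and the time range picker should only filter which rows are displayed.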
Hi all, is there a way to use one deployment server to push apps to 2 different search head clusters? For example, I have a search head cluster named site1 and I want to install a new search head cluster named site2, then push some apps to site1 and different apps to site2, so I can control which apps will be pushed to each site.
Thanks for the answer, but unfortunately that doesn't solve the issue. And I'm puzzled how a platform like SOAR doesn't provide granular user & role permissions. We should be able to define that a user can only assign containers/tasks to other users within their own role, instead of to everybody (or similar)... Because the default settings allow a given user to assign a container to whichever user or role they wish... Does anyone know if there is a way using the REST API or playbooks?
Hi @isoutamo, thank you for your support. It was a mistyping; the actual issue was that the searchmatch() function doesn't run in INGEST_EVAL. Using the match() function, my INGEST_EVAL is working. Thank you again for your support. Ciao. Giuseppe
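For anyone who lands here later, a minimal sketch of the working pattern (the sourcetype, stanza name, field name, and regex are hypothetical):

# props.conf
[my_sourcetype]
TRANSFORMS-set_severity = set_severity_flag

# transforms.conf
[set_severity_flag]
INGEST_EVAL = severity = if(match(_raw, "ERROR"), "high", "low")

match() is a regular eval function and works at index time, while searchmatch() is only available at search time.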
I could see we need to use the splunklib library in custom command creation, but when I try to install the library I get an exception due to downloading its dependency, pycrypto, which I understand is not supported in Splunk version 9.x. Is there an alternate way to do it?
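One common approach, rather than pip-installing the SDK on the Splunk host, is to copy the splunklib package from the Splunk SDK for Python directly into your app's bin/ directory, so no dependency resolution happens at all. A minimal streaming command sketch, assuming splunklib sits next to the script (the command name is hypothetical):

import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class PassthroughCommand(StreamingCommand):
    # Hypothetical command that passes records through unchanged.
    def stream(self, records):
        for record in records:
            yield record

if __name__ == "__main__":
    dispatch(PassthroughCommand, sys.argv, sys.stdin, sys.stdout, __name__)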
Hi @ques_splunk, good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hello everyone, I have a problem with protocol detection in Splunk logs! I see bittorrent everywhere in my logs, and the traffic is not bittorrent! I tracked the traffic and it's between a network device and a monitoring tool. I have DPI (deep packet inspection) installed as an Aux, but it seems to be a wrong app detection in Splunk. What should I do? Is there any help with that? #SPLUNK
What difference are you expecting? Are you trying to say that in your example kb is 1000-based and you want to convert to 1024-based? That is not what memk does. In this case just do | eval KB=round(kb/1.024,3). If they are both 1024-based, then they are the same number, so memk will not do anything.
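As a quick sanity check, here is that conversion run standalone (the input value is made up):

| makeresults
| eval kb=1500
| eval KB=round(kb/1.024,3)

This returns KB=1464.844, i.e. 1500 1000-based kB expressed as 1024-based KB: 1500 * 1000 / 1024 = 1464.844.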
Apart from what @richgalloway already pointed out, the question is what you are trying to do. If you're trying to spawn a subsearch for each event from the base search... that doesn't work this way. You could use map to spawn a separate search for each result row, but that's a highly ineffective method. You're probably better off appending two separate result sets and doing some magic on that compound data to get your results.
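A minimal sketch of the append approach (the index, sourcetypes, and field names are hypothetical):

index=my_index sourcetype=events_a
| append [ search index=my_index sourcetype=events_b | fields user status ]
| stats values(*) as * by user

Both result sets land in one pipeline, and the final stats stitches them together on the shared key.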
Thanks, I'll try your suggestion. And yes, I agree, I think it's a syntax error. That's the error: "Error in 'EvalCommand': The expression is malformed."
It would help to know the error you received, but I suspect it's a syntax error of some sort. That's because subsearches have to be placed where their results would make semantic sense. IOW, if the subsearch produces a result like (original_user=foo OR original_user=bar) then this makes no sense:

| eval Name= mvindex((newValue),1) (original_user=foo OR original_user=bar)
| stats values(*) as *

Try this, instead:

(index=<my index>) EventType="A" EventType=A
| rename username as original_user
| eval Id= mvindex((newValue),0)
| eval Name= mvindex((newValue),1)
| search
    [ search index=<my index> <filtering by a string>
    | eval src_email= mvindex((newValue),3)
    | rex field=src_email "(?<original_user>[\w\d\.\-]+\@[\w\d\.]+)"
    | fields original_user
    | format ]
| stats values(*) as *

Or this similar query for better performance:

(index=<my index>) EventType="A" EventType=A
    [ search index=<my index> <filtering by a string>
    | eval src_email= mvindex((newValue),3)
    | rex field=src_email "(?<original_user>[\w\d\.\-]+\@[\w\d\.]+)"
    | fields original_user
    | rename original_user as username
    | format ]
| rename username as original_user
| eval Id= mvindex((newValue),0)
| eval Name= mvindex((newValue),1)
| stats values(*) as *
Hello, I'm doing a detection for an event on the same index with 2 logs. I want to filter events of Event A based on whether the username field exists with the same value in Event B. I tried doing a sub-search, but I get errors with the query below. I want to filter Event A by whether there are any events from Event B with the same original_user:

(index=<my index>) EventType="A" EventType=A
| rename username as original_user
| eval Id= mvindex((newValue),0)
| eval Name= mvindex((newValue),1)
    [ search index=<same index> <filtering by a string>
    | eval src_email= mvindex((newValue),3)
    | rex field=src_email "(?<original_user>[\w\d\.\-]+\@[\w\d\.]+)"
    | fields original_user ]
| stats values(*) as *

The above query says my eval is malformed. Is there any way to solve it? Append/Join? I also tested the query inside the sub-search by itself and it works with no issues.
You are great, that worked! Thank you for sharing your knowledge.
Hi, I always have a Makefile which generates deployment-ready xxx.spl files from all of the current client's apps into one directory, plus a combined tar file. Those are easy to transfer and use where they are needed. r. Ismo
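A minimal sketch of such a Makefile (the directory layout and names are hypothetical; an .spl file is simply a gzipped tarball of the app directory):

# Package every app under apps/ as dist/<app>.spl, then bundle them.
APPS := $(notdir $(wildcard apps/*))

all: $(APPS:%=dist/%.spl) dist/all-apps.tar

dist/%.spl: apps/%
	mkdir -p dist
	tar -czf $@ -C apps $*

dist/all-apps.tar: $(APPS:%=dist/%.spl)
	tar -cf $@ -C dist $(notdir $^)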
What data do you have, and what search do you have so far?
While Informix is not officially supported - https://docs.splunk.com/Documentation/DBX/3.17.2/DeployDBX/Installdatabasedrivers - you can try to configure it with the proper JDBC drivers for your DB, but you have to look for them yourself. It might work.
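If you try it, DB Connect lets you declare a custom connection type in db_connection_types.conf, alongside dropping the driver jar into the app's drivers/ directory. A hypothetical, untested stanza modeled on the stanzas shipped for supported databases (the class names and URL format come from the standard Informix JDBC driver, not from Splunk docs):

# db_connection_types.conf
[informix]
displayName = Informix
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.informix.jdbc.IfxDriver
jdbcUrlFormat = jdbc:informix-sqli://<host>:<port>/<database>:INFORMIXSERVER=<server>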
No, in an all-in-one setup you don't have to separately install the for_indexers addon. It's used if you have a multi-tier environment, because there you install the main ES app on the search head(s), which means you don't have the indexes defined on the indexer tier. But in an all-in-one installation you install the ES app on the component working as both indexer and search head, so the indexes should be created during installation. The indexes themselves (the data directories) should be in the same place as all the other indexes, so by default that would be /opt/splunk/var/lib/splunk. If you want to see where the configs that define the notable index live, run: splunk btool indexes list notable --debug
Also, to ask this: the doc at https://docs.splunk.com/Documentation/ES/7.3.2/Install/Indexes lists indexes under specific apps. Are these apps installed when I install ES, and after installing Splunk_TA_ForIndexers, will I have access to all the indexes listed there? How are the apps associated there installed on my all-in-one instance? Are the apps above installed when I installed ES, and the indexes installed when I install the TA? This just has my head a bit confused, thank you for answering all this!
Hello, new to Splunk. I am trying to exclude certain applications in an SPL search, specifically by app name. What field would I need to consider in order to apply the '!=' operator plus the app name? Thanks again.
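The field name depends on your data source, so check your events first, but as a sketch (the index and field names are assumptions; the field is often called app, app_name, or appName depending on the sourcetype):

index=your_index app!="AppToExclude" app!="OtherApp"

Run your base search and look at the fields sidebar to confirm the actual field name before excluding on it.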