All Posts

If you import via a zip file extract, you will get a warning that the file already exists. Are you importing with the outputlookup command? Please give us more details, thanks.
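As a minimal sketch of the outputlookup behavior (the lookup and source file names here are hypothetical): with append=true, new rows are added to the existing lookup; without it, outputlookup replaces the file entirely.

```
| inputlookup new_entries.csv
| outputlookup append=true lookup.csv
```

Dropping append=true from the last line would overwrite lookup.csv with only the contents of new_entries.csv.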
C:\Program Files\Splunk\var\run\7514f26da673bbe6.tar.gz — may we know if this file exists and is readable by the Splunk user? It looks like the Splunk user has no read/write permission on the Splunk home directory. Please verify it, thanks.
I am installing the Python for Scientific Computing add-on, but I get this error: "Error during app install: failed to extract app from C:\Program Files\Splunk\var\run\7514f26da673bbe6.tar.gz to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\df9ffe7f1b8aef48: The system cannot find the path specified." What should I do to solve this problem?
Dear splunkers: @PickleRick  @richgalloway @isoutamo  can you give me some suggestions to help me solve my problem? Thank you!
Any suggestions?
If I Import into an existing lookup.csv will the current contents be overwritten?
Can you provide me with step-by-step instructions on how to ingest Cayosoft Administrator logs into Splunk? 
>>> As our programming skills are non-existent (but we can follow instructions well), we wonder if it's possible to set up this workflow and what possibilities exist related to it all. We find it challenging to comprehend the possibilities available. Is it simple for non-programmers to follow, or does it require a robust infrastructure or extra investment for certain use cases (perhaps payment to Twitter)? Some use cases might be achievable through multiple other easy methods?

1) Splunk does not require programming skills; roughly 95% of Splunk tasks can be done without programming.
2) Splunk is simple and easy to use — well suited to non-programmers, IT managers, and decision makers. That is one reason Splunk has been a market leader for a decade.
3) For Twitter hashtag sentiment analysis you don't need any extra payment. For a proof of concept you can use the free version of Splunk; for a production environment you can buy a Splunk license.

>>> While our use case is culturally important. As hashtags are not so big, it likely generates a small amount of results. However, we'd like to understand the potential costs if the hashtag were to trend, resulting in thousands of daily results (e.g., 5% with images). What would be the approximate cost for saving 5000+ results per day/month? What are the limits here?

This looks like a simple task that Splunk can handle very easily.

>>> Is Splunk the right tool for this kind of work? Are there pre-made templates or samples/posts to guide this type of work? AI directed us to this software when we sought help with saving hashtags.

Splunk is a good fit for these kinds of tasks, and there are hundreds of pre-made templates (dashboards).

>>> Are there any requirements for the Twitter API?

The Twitter-to-Splunk integration is the troublesome part of this project.
There were good Splunk apps for integrating tweets into Splunk (for example, https://splunkbase.splunk.com/app/1162), but in the last year or two these apps have stopped working (I tested this about two months ago). Please check this app: https://splunkbase.splunk.com/app/4619 — it looks good; let me test it and get back to you. Questions: 1) How do you currently hold the tweet and hashtag details? I assume you have them in an Excel sheet — do you have that sheet yet? 2) Should the tweets be sent to Splunk automatically, or will someone download the tweets and upload them to Splunk manually?
Just make status indicate success or failure and then do this:
... | fillnull validate_by value=null
| eval status = case(validate_by="Invokema", 1, validate_by="null", 0, true(), -1)
| stats count by status
Greetings, Splunkers! My use case is quite straightforward: we aim to save and monitor (secondarily) some rare hashtags from the Twitter platform (unsure if it's possible on Facebook and other platforms). We would like hashtag-related posts to be chronologically arranged from beginning to end, viewable in HTML format, and exportable to Excel (at minimum).

Additionally, if possible, it would be ideal to see statistics on which accounts/dates used the hashtags the most. Further possibilities include analyzing post comments, likes, and the followers/following of the posters — if feasible, in the future.

We've downloaded the following software for our Linux platform: https://download.splunk.com/products/splunk/releases/9.1.2/linux/splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb

As our programming skills are non-existent (but we can follow instructions well), we wonder if it's possible to set up this workflow and what possibilities exist related to it all. We find it challenging to comprehend the possibilities available. Is it simple for non-programmers to follow, or does it require a robust infrastructure or extra investment for certain use cases (perhaps payment to Twitter)? Some use cases might be achievable through multiple other easy methods? We'd like the workflow to be more of a "set and forget" with the ability to easily change the hashtag/search word in the future if possible.

We are currently in a 14/60 day trial (unsure if it's 60 or 14 days and what is included in that) and would like to know the future cost after the free usage ends. While our use case is culturally important, the hashtags are not so big, so it likely generates a small amount of results. However, we'd like to understand the potential costs if the hashtag were to trend, resulting in thousands of daily results (e.g., 5% with images). What would be the approximate cost for saving 5000+ results per day/month? What are the limits here? Is Splunk the right tool for this kind of work?
Are there pre-made templates or samples/posts to guide this type of work? AI directed us to this software when we sought help with saving hashtags. Please feel free to provide thorough explanations for any part of our questions and include links to samples. We are comfortable copying, pasting, and editing scripts but cannot follow programming logic ourselves.

Are there any requirements for the Twitter API? Such as a separate account login to avoid posing a threat to our main account (as it seems X no longer allows browsing the site without logging in). If there are sample use cases related to my needs and what is possible, I would also like to see screenshots or videos. Thank you for all the help and patience. splunk@arvutiministeerum.ee — extra information can be sent to our email as well.

// Is 'Knowledge Management' the correct spot for this post? Feel free to transfer it if there is a more appropriate sub-forum. // Margus
You need more than a CSV file to monitor the users. Presuming you are logging user login failures in Splunk, you can filter them using the CSV file like this:

index=foo [ | inputlookup mylookup.csv | fields <<user name field>> | rename <<user name field>> AS <<indexed user name field>> ]
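Building on that subsearch pattern, a complete alert search might look like the sketch below. The index, field names, and threshold here are assumptions for illustration, not the poster's actual schema:

```
index=foo action=failure
    [ | inputlookup mylookup.csv
      | fields user
      | rename user AS user_name ]
| stats count AS failures BY user_name
| where failures > 3
```

Saved as an alert, this fires only for login-failure events whose user appears in the lookup, once a user exceeds the chosen failure threshold in the search window.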
I have a CSV file with a user list, and I want to create an alert that monitors login failures for the users in that list. How do I use the lookup file? Can you please let me know?
I got some data when I started up the application, but not since then. There are about 60 users, so I should be seeing something. Any ideas on what I can do to get it to collect data? https://imgur.com/a/S2s4Cgr It's been running for probably a couple hours now.
At the risk of spoiling @mnj1809's homework, I would condense this to a single eval statement:

| eval ip=mvmap(mvsort(mvmap(ip, mvjoin(mvmap(split(ip, "."), substr("00".ip, -3)), "."))), mvjoin(mvmap(split(ip, "."), coalesce(nullif(ltrim(ip, "0"), ""), "0")), "."))

However, understanding nested mvmap magic is also homework.
Hi @gcusello, Have you considered creating one new local tcp (or udp) input and corresponding syslog output per source and using _SYSLOG_ROUTING in a transform to send the events to the new local inputs for re-parsing? The new inputs would have a default sourcetype setting defined for the source. Splunk's syslog outputs have hard-coded queue maxSize values, so you'll need to increase parallelIngestionPipelines to scale if needed.

syslog --> input1 --+-- type1 --- outputA --> inputA --- - - -
                    +-- type2 --- outputB --> inputB --- - - -
                    +-- type3 --- outputC --> inputC --- - - -

You may also be able to do the same with an additional local splunktcp input, output group, _TCP_ROUTING, and a modified default route to re-parse cooked data, but I'd be wary of unintentional feedback loops.
This question is nearly the same as one you asked at https://community.splunk.com/t5/Alerting/And-condition/m-p/670558#M15557 . Please show how you used the answers there to craft a query that solves this problem and how that query fails to meet expectations. As stated in the referenced answer, the AND operator works only on a single event and cannot compare values among several events.
I have the below requirement.

Log info: 09.00PM Xyz event received for customernumber:1234
Log info: 09.05PM abc event received for customernumber:1234
Log info: 09.10PM pqr event received for customernumber:1234

There are n customer numbers like this in the Splunk log, and I want to check whether all 3 events have been received for each customer number. I tried using the AND operator, like "xyz event received for customernumber" AND "abc event received for customernumber" AND "pqr event received for customernumber" | rex ... (I tried to retrieve customernumber), but it is not working. Please help with the exact query.
Hi @jhooper33, Internally, your regular expression compiles to a length that exceeds the offset limits set in PCRE2 at build time. For example, the regular expression (?<bar>.){3119} will compile:

| makeresults
| eval foo="anything"
| rex field=foo "(?<bar>.){3119}"

but this regular expression (?<bar>.){3120} will not:

| rex field=foo "(?<bar>.){3120}"

Error in 'rex' command: Encountered the following error while compiling the regex '(?<bar>.){3120}': Regex: regular expression is too large.

Repeating the match 3,120 times exceeds PCRE2's compile-time limit. If we add a second character to the pattern, we'll exceed the limit in fewer repetitions:

Good: | rex field=foo "(?<bar>..){2339}"
Bad: | rex field=foo "(?<bar>..){2340}"

Error in 'rex' command: Encountered the following error while compiling the regex '(?<bar>..){2340}': Regex: regular expression is too large.

The error message should contain a pattern offset to help identify the problem; however, Splunk does not expose it, and enabling DEBUG output on SearchOperator:rex adds no other information. In short, the code generated by the regular expression compiler is too long, and you'll need to modify your regular expression.

With respect to CSV lookups versus KV store lookups: test, test, and test again. A CSV file deserialized to an optimized data structure in memory should have a search complexity similar to a MongoDB data store, but because the entire structure is in memory, the CSV file may outperform a similarly configured KV store lookup. If you want to replicate your lookup to your indexing tier as part of your search bundle, you also need to consider that a KV store lookup will be serialized to a CSV file and used as a CSV file on the indexer. Finally, if you're using Splunk Enterprise Security, consider integrating your threat intelligence with the Splunk Enterprise Security threat intelligence framework.
The latter may not meet 100% of your requirements, so as before, test, test, and test again.
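If the oversized pattern was only capturing a fixed-length span of text, one workaround (a sketch under that assumption; the field names are illustrative) is to avoid the repetition quantifier entirely and use substr, which has no PCRE2 compile limit:

```
| makeresults
| eval foo="anything"
| eval bar=substr(foo, 1, 3120)
```

This extracts the first 3,120 characters of foo without compiling a regular expression at all; rex is only needed when the match position depends on the pattern itself.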
Hi @jhooper33, let us know if we can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking,
Giuseppe

P.S.: Karma Points are appreciated by all the contributors.
@PickleRick!!! Thank you for the suggestion. I haven't seen anyone use KVstore for lookup for any of our searches yet so I will definitely look into this.