All Posts


Thanks @gcusello - the only reason to refresh the panels every 15 seconds is to see the results of the last executed report run within 15 seconds, rather than waiting up to a minute for the latest results to appear on the dashboard once the report run has completed. Thank you for your help on this.
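In case it helps, here is a minimal Simple XML sketch of a panel that reuses a saved report and refreshes every 15 seconds - the report name "My Report" is just a placeholder for your own scheduled report:

<dashboard>
  <row>
    <panel>
      <table>
        <!-- reuse the scheduled report instead of re-running the search -->
        <search ref="My Report">
          <!-- reload the panel every 15 seconds so new report results show up quickly -->
          <refresh>15s</refresh>
          <refreshType>delay</refreshType>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

With refreshType set to delay, the 15-second timer restarts after each load, so the panel keeps picking up the latest results shortly after the report run completes.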
Sending logs from Universal Forwarders to Heavy Forwarders is like passing along important messages from one person to another in a relay. Here's a simple way to understand it:

Imagine Passing Notes: Think of Universal Forwarders as individuals who have notes (logs) with important information. Heavy Forwarders are the ones ready to collect and manage these notes.

Universal Forwarders (Note Holders): Universal Forwarders are like people holding notes (logs) and standing in a line. They generate logs from different sources on a computer.

Heavy Forwarders (Note Collectors): Heavy Forwarders are the ones waiting at the end of the line to collect these notes (logs) from the Universal Forwarders.

Setting Up the Relay: You set up a system where each person (Universal Forwarder) in the line passes their note (log) to the next person (Heavy Forwarder) until it reaches the end.

Configuring Universal Forwarders: On each computer with a Universal Forwarder, you configure it to know where the next person (Heavy Forwarder) is in line. This is like telling each note holder where to pass their note.

Logs Move Down the Line: As logs are generated, they move down the line from Universal Forwarder to Universal Forwarder until they reach the Heavy Forwarder.

Heavy Forwarder Collects and Manages: The Heavy Forwarder collects all the notes (logs) from the different Universal Forwarders. It's like the person at the end of the line collecting all the notes to manage and make sense of them.

Centralized Log Management: Now all the important information is centralized on the Heavy Forwarder, making it easier to analyze and keep track of everything in one place.

In technical terms, configuring Universal Forwarders to send logs to Heavy Forwarders involves setting up these systems to efficiently collect and manage logs from different sources across a network. It's like orchestrating a relay of information to ensure that important data reaches its destination for centralized management and analysis.
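In concrete terms, that relay is just a couple of .conf stanzas. A minimal sketch, assuming the Heavy Forwarder is reachable at hf.example.com and listens on the usual forwarding port 9997 (both are placeholders for your environment):

# outputs.conf on each Universal Forwarder - tells the note holder where to pass the note
[tcpout]
defaultGroup = heavy_forwarders

[tcpout:heavy_forwarders]
server = hf.example.com:9997

# inputs.conf on the Heavy Forwarder - tells the collector to listen for forwarded notes
[splunktcp://9997]
disabled = 0

After editing the files, restart the forwarder; alternatively, running ./splunk add forward-server hf.example.com:9997 on the Universal Forwarder writes an equivalent outputs.conf entry for you.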
Thank you for your quick response! I gave my account these roles (admin and user) and restarted Splunk Enterprise, but I still get the same error when installing the Python for Scientific Computing application... and there is no such file C:\Program Files\Splunk\var\run\7514f26da673bbe6.tar.gz in C:\Program Files\Splunk\var\run.
Hi, I'm using the Splunk App for Lookup File Editing.
How are you importing? If you import via a zip file extract, it will give you a warning message that the file already exists. Or are you importing with the outputlookup command? Please give us more details, thanks.
C:\Program Files\Splunk\var\run\7514f26da673bbe6.tar.gz - may we know if this file exists and is readable by the Splunk user? It looks like the Splunk user has no read/write permission on the Splunk home directory. Please verify it, thanks.
I am installing the Python for Scientific Computing add-on, but I get an error like this:

Error during app install: failed to extract app from C:\Program Files\Splunk\var\run\7514f26da673bbe6.tar.gz to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\df9ffe7f1b8aef48: The system cannot find the path specified.

What should I do to solve this problem?
Dear Splunkers: @PickleRick @richgalloway @isoutamo, can you give me some suggestions to help me solve my problem? Thank you!
Any suggestions?
If I import into an existing lookup.csv, will the current contents be overwritten?
Can you provide me with step-by-step instructions on how to ingest Cayosoft Administrator logs into Splunk? 
>>> As our programming skills are non-existent (but we can follow instructions well), we wonder if it's possible to set up this workflow and what possibilities exist related to it all. We find it challenging to comprehend the possibilities available.
>>> Is it simple for non-programmers to follow, or does it require a robust infrastructure or extra investment for certain use cases (perhaps payment to Twitter)?
>>> Some use cases might be achievable through multiple other easy methods?

1) Yes - Splunk does not require programming skills at all (95% of Splunk tasks can be done without programming).
2) Splunk is a very simple, easy-to-use tool, and one of the best for non-programmers, IT managers and decision makers. That is why Splunk has been a market leader for a decade.
3) For Twitter hashtag sentiment analysis you don't need any extra payment. For a proof of concept you can use free Splunk; for a production environment you can get a Splunk license, which is also inexpensive.

>>> While our use case is culturally important, the hashtags are not very big, so they likely generate a small amount of results. However, we'd like to understand the potential costs if the hashtag were to trend, resulting in thousands of daily results (e.g., 5% with images). What would be the approximate cost for saving 5000+ results per day/month? What are the limits here?

This looks like a simple task that Splunk can handle very easily.

>>> Is Splunk the right tool for this kind of work? Are there pre-made templates or samples/posts to guide this type of work? AI directed us to this software when we sought help with saving hashtags.

Splunk is a good fit for these kinds of tasks, and there are hundreds of pre-made templates (dashboards). The fact that the AI suggested Splunk also shows how well it suits this task.

>>> Are there any requirements for the Twitter API?

The Twitter-to-Splunk integration is the trickiest part of this project. There were good Splunk apps for pulling tweets into Splunk (for example https://splunkbase.splunk.com/app/1162), but over the last year or two these apps have had problems and stopped working (I tested this about 2 months ago). Please check this app - https://splunkbase.splunk.com/app/4619 - everything looks good with it; let me test it and get back to you.

Questions:
1) May I know how you have the tweets and hashtags details? I assume you have the tweets and hashtags in an Excel sheet - do you already have this sheet, or not yet?
2) Should the tweets be sent to Splunk automatically, or will someone download the tweets and upload them to Splunk manually?
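Once the tweets are indexed, the statistics mentioned above (which dates and which accounts used the hashtag the most) are one-line searches. A rough sketch, assuming the ingestion app writes the tweets to an index called twitter with fields named hashtags and screen_name - the real field names depend on the app you end up using:

index=twitter hashtags="#yourhashtag"
| timechart span=1d count AS posts_per_day

And to see which accounts post the hashtag most often:

index=twitter hashtags="#yourhashtag"
| stats count AS posts BY screen_name
| sort - posts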
Just make status indicate success or failure and then do this:

| fillnull value="null" valid_by
| eval status = case(valid_by="Invokema", 1, valid_by="null", 0, true(), -1)
| stats count by status
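If you want to sanity-check the case() logic without real data, here is a quick throwaway search - the three sample values are made up:

| makeresults count=3
| streamstats count AS n
| eval valid_by=case(n=1, "Invokema", n=2, "null", true(), "something_else")
| eval status=case(valid_by="Invokema", 1, valid_by="null", 0, true(), -1)
| stats count by status

You should get one row per status value (1, 0 and -1), each with a count of 1.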
Greetings Splunkers!

My use case is quite straightforward: we aim to save and monitor (secondarily) some rare hashtags from the Twitter platform (unsure if it's possible on Facebook and other platforms). We would like the hashtag-related posts to be chronologically arranged from beginning to end, viewable in HTML format, and exportable to Excel (at minimum). Additionally, if possible, it would be ideal to see statistics on which accounts/dates used the hashtags the most. Further possibilities include analyzing post comments, likes, and the followers/following of the posters - if feasible, in the future.

We've downloaded the following software for our Linux platform: https://download.splunk.com/products/splunk/releases/9.1.2/linux/splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb

As our programming skills are non-existent (but we can follow instructions well), we wonder if it's possible to set up this workflow and what possibilities exist related to it all. We find it challenging to comprehend the possibilities available. Is it simple for non-programmers to follow, or does it require a robust infrastructure or extra investment for certain use cases (perhaps payment to Twitter)? Some use cases might be achievable through multiple other easy methods? We'd like the workflow to be more of a "set and forget" with the ability to easily change the hashtag/search word in the future if possible.

We are currently in a 14/60 day trial (unsure if it's 60 or 14 days and what is included in that) and would like to know the future cost after the free usage ends. While our use case is culturally important, the hashtags are not very big, so they likely generate a small amount of results. However, we'd like to understand the potential costs if the hashtag were to trend, resulting in thousands of daily results (e.g., 5% with images). What would be the approximate cost for saving 5000+ results per day/month? What are the limits here?

Is Splunk the right tool for this kind of work? Are there pre-made templates or samples/posts to guide this type of work? AI directed us to this software when we sought help with saving hashtags. Please feel free to provide thorough explanations for any part of our questions and include links to samples. We are comfortable copying, pasting and editing scripts but cannot follow programming logic ourselves.

Are there any requirements for the Twitter API? Such as a separate account login to avoid posing a threat to our main account (as it seems X does not allow browsing the site without logging in anymore). If there are sample use cases related to my needs and what is possible, I would also like to see screenshots or videos about it.

Thank you for all the help and patience. splunk@arvutiministeerum.ee - extra information can be sent to our email as well.

// Is 'Knowledge Management' the correct spot for this post? Feel free to transfer it if there is a more appropriate sub-forum. // Margus
You need more than a CSV file to monitor the users. Presuming you are logging user login failures in Splunk, you can filter them using the CSV file like this:

index=foo [ | inputlookup mylookup.csv | fields <<user name field>> | rename <<user name field>> AS <<indexed user name field>> ]
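To make that concrete, here is a sketch of what the alert search could look like, assuming the lookup column is named user, the events carry a matching user field, and failed logins are marked with action=failure - adjust all three names to your data:

index=foo action=failure
    [ | inputlookup mylookup.csv | fields user ]
| stats count AS failures BY user
| where failures >= 3

Save it as an alert that triggers when the number of results is greater than zero; the threshold of 3 failures is arbitrary and can be changed or dropped.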
I have a CSV file with a user list, and I want to create an alert that monitors login failures for the users in that list. How do I use the lookup file? Can you please let me know?
I got some data when I started up the application, but not since then. There are about 60 users, so I should be seeing something. Any ideas on what I can do to get it to collect data? https://imgur.com/a/S2s4Cgr It's been running for probably a couple hours now.
At the risk of spoiling @mnj1809's homework, I would condense this to a single eval statement:

| eval ip=mvmap(mvsort(mvmap(ip, mvjoin(mvmap(split(ip, "."), substr("00".ip, -3)), "."))), mvjoin(mvmap(split(ip, "."), coalesce(nullif(ltrim(ip, "0"), ""), "0")), "."))

However, understanding nested mvmap magic is also homework.
Hi @gcusello, Have you considered creating one new local tcp (or udp) input and a corresponding syslog output per source, and using _SYSLOG_ROUTING in a transform to send the events to the new local inputs for re-parsing? The new inputs would have a default sourcetype setting defined for the source. Splunk's syslog outputs have hard-coded queue maxSize values, so you'll need to increase parallelIngestionPipelines to scale if needed.

syslog --> input1 --+-- type1 --- outputA --> inputA --- - - -
                    |
                    +-- type2 --- outputB --> inputB --- - - -
                    |
                    +-- type3 --- outputC --> inputC --- - - -

You may also be able to do the same with an additional local splunktcp input, output group, _TCP_ROUTING, and a modified default route to re-parse cooked data, but I'd be wary of unintentional feedback loops.
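For anyone attempting this, here is a rough sketch of the moving parts for one source type; the regex, port, group name and sourcetype are all placeholders, and it assumes the syslog output loops back to the same instance:

# props.conf - applied to the original syslog input (here a plain UDP 514 listener)
[source::udp:514]
TRANSFORMS-route_by_type = route_type1

# transforms.conf - matching events are routed to a dedicated syslog output group
[route_type1]
REGEX = type1
DEST_KEY = _SYSLOG_ROUTING
FORMAT = outputA

# outputs.conf - the output group points back at a new local listener
[syslog:outputA]
server = 127.0.0.1:10514

# inputs.conf - the new local input re-parses the events with the right sourcetype
[udp://10514]
sourcetype = type1

The REGEX should be whatever reliably identifies type1 events in the raw syslog stream; repeat the pattern (new transform, output group, port and input) for type2 and type3.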