All Topics


Hey Splunksters, I have a scripted input (PowerShell) that correctly outputs 6 fields on the screen like this:

    expiration_date   user    login        cardRequired   location   account_last_changed
    mm/dd/yy 15:03    joblo   some_stats   true/false     blah       mm/dd/yy 16:06

However, when Splunk ingests these fields, it cuts off the last one (account_last_changed) in _raw. Anybody know why? I tried setting TRUNCATE=0, TRUNCATE=500000, etc. in props.conf, but I cannot for the life of me get that last field to show up in Splunk.

I also thought that perhaps Splunk was treating that last field like a new timestamp and was thus getting confused and cutting it off. However, I tried moving the field closer to the first timestamp in the actual script (got all six fields to output correctly to the screen), but Splunk was still cutting it off.

Any help is much appreciated! Thanks!
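The trailing timestamp does look like the usual suspect: with default line merging, Splunk can treat the second date as the start of a new event. A minimal props.conf sketch along those lines, where the sourcetype name is an assumption to be matched against inputs.conf:

    # props.conf -- sourcetype name is a placeholder
    [my_powershell_script]
    SHOULD_LINEMERGE = false
    # Break events only on real newlines, not on the second timestamp
    LINE_BREAKER = ([\r\n]+)
    # Anchor timestamp recognition to the start of the event so the
    # trailing account_last_changed value is not read as the event time
    TIME_PREFIX = ^
    MAX_TIMESTAMP_LOOKAHEAD = 16
    TRUNCATE = 0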
Hi, I'm having major issues with Perfmon collection. Values collected for "% Processor Time" (as well as privileged and user time) sometimes contain invalid information. I'm just monitoring a single 6 vCPU machine. While for some processes the CPU usage is correctly returned as a percentage value between 0 and 600, other processes every few minutes return values that are way off the charts, from a few thousand up to, for example, 1.5 million. I cannot see any of those numbers while running Perfmon itself. The process IDs of those processes also don't change. Regards
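For comparison, a minimal inputs.conf sketch for a Process-object Perfmon input; the stanza name, interval, and counter list are assumptions, not the poster's actual config:

    [perfmon://Process]
    object = Process
    counters = % Processor Time; % Privileged Time; % User Time
    instances = *
    interval = 60
    # useEnglishOnly = true  # worth trying if counter names are localized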
Hello, I'm working on a really complex search where I need to combine results from different lookup tables. One lookup table is really big with multiple million entries, while the other one is quite small with only a thousand entries. Both tables have one common field, let's call it "office". The big table has entries for tasks which are applied to a certain office. The other table has more information about the office.

Some example data for the task lookup:

    office   city     country   importance
    xxx      madrid   spain     very important
    yyy      paris    france    important

The office table looks similar to this:

    office   group   name
    xxx      this    aaa
    yyy      that    bbb

I want to add the group and name fields to the first task table, without losing any entries from the task table, so I can continue working with it. I've tried a lot of different approaches but none of them work. I got the best results with this search, but it's still not the outcome I want:

    | inputlookup task_lookup
    | eval importance_very_important=if(match(importance, "very important"), 1, 0),
           importance_important=if(match(importance, "important"), 1, 0),
           importance_less_important=if(match(importance, "less important"), 1, 0)
    | eval source="task"
    | append [| inputlookup office_lookup | eval source="office"]
    | stats values(source) as source, values(country) as country, values(city) as city,
            sum(importance_*) as *, values(group) as group, values(name) as name by office
    | where mvcount(source)=2

This search gives me the right combination of fields BUT it also combines the different cities and countries, which I don't want, since I need them separated so I can filter on them. I get the following outcome (e.g.):

    office   country              city           name   group   very_important   important   less_important
    xxx      spain france italy   madrid paris   aaa    this    3                7           8
    yyy      france spain         rome paris     bbb    that    5                3           4

So all in all I need a result table that doesn't combine any values, so I can work with them separately. I'm at a point where I have no clue how to accomplish this, so any help would be highly appreciated!

Additional info: I don't want to use join since the first lookup has so many entries; I don't think that's going to work. I also can't just use mvexpand, since it doesn't properly expand the counts for the different tasks with their importance.
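For what it's worth, this kind of enrichment can usually be done without append or join: the lookup command adds fields row by row and keeps every task entry intact. A minimal sketch, assuming a lookup definition named office_lookup exists for the small table:

    | inputlookup task_lookup
    | lookup office_lookup office OUTPUT group, name

Each task row keeps its own city and country, with group and name attached from the matching office row.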
Hi all, I'm trying to dynamically add columns to two fixed columns based on the environment value selected. For instance, this is the input data:

    Environment   Application   CartridgeType   Cartridge   Version
    DEV           A-1           User            Alpha       1.1
    DEV           A-2           Product         Beta        1.2
    UAT           A-1           User            Alpha       1.2
    SVP           A-1           User            Alpha       1.4
    SVP           A-1           User            Sigma       1.5
    SVP           A-2           Product         Beta        1.2
    SVP           A-3           System          Gamma       1.5

And I would like to create a table such as the following:

    CartridgeType   Cartridge   DEV:A-1   DEV:A-2
    User            Alpha       1.1
    Product         Beta                  1.2

Some key things to note: the first two columns should stay constant; however, depending on the environment value selected in the search (e.g. Environment="DEV"), the environment value should be combined with the 'Application' value to create another column, in which the values are the corresponding 'Version' values. The tricky part is making the fields after "Cartridge" dynamic. For instance, if Environment="SVP", I would expect the following:

    CartridgeType   Cartridge   SVP:A-1   SVP:A-2   SVP:A-3
    User            Alpha       1.4
    User            Sigma       1.5
    Product         Beta                  1.2
    System          Gamma                           1.5

Is this possible to do whilst making sure to only show the latest version value? Thank you so much for any help!
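One hedged way to sketch this is to build the Environment:Application column name with eval and pivot with xyseries; the field names come from the post, but the index name is a placeholder and latest() assumes the events carry a usable _time:

    index=my_index Environment="SVP"
    | eval col = Environment . ":" . Application
    | eval row = CartridgeType . "|" . Cartridge
    | stats latest(Version) as Version by row, col
    | xyseries row col Version
    | eval CartridgeType = mvindex(split(row, "|"), 0), Cartridge = mvindex(split(row, "|"), 1)
    | fields - row
    | table CartridgeType, Cartridge, *

The row field is a temporary concatenation because xyseries takes a single row key; splitting it back afterwards restores the two fixed columns.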
Hi, after the installation of ITE Works 4.9.2 and the Exchange content pack, I checked all the dashboards to be sure the data was correctly processed, and I realized that some panels were blank. In one of them, Inbound Messages - Microsoft Exchange, the panel related to the inbound message volume is empty. Looking into the search,

    `msgtrack-inbound-messages` | eval total_kb=total_bytes/1024 | timechart fixedrange=t bins=120 per_second(total_kb) as "Bandwidth"

I realized that the first macro does not return a total_bytes column, so the eval cannot create the new field total_kb, and the timechart cannot visualize anything. Is there some configuration missing on my side, or is it a known bug of the content pack? Cheers
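To narrow down whether the field is genuinely missing or just named differently, a quick look at the macro's output fields can help; the fieldsummary below is only a diagnostic sketch, not part of the content pack:

    `msgtrack-inbound-messages` | fieldsummary | search field=total_bytes OR field=*bytes*

If no *bytes* field comes back at all, the sourcetype or extraction feeding the macro is the place to look next.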
Hi, in my search results there are multiple events with one timestamp. I need to split them. Does somebody have any idea? Thanks in advance for your help.
Hi, I have one index (test0) on a standalone server. I'm trying to keep 3 months of data searchable, move data to the archive after 6 months, and delete data after 12 months for retention. Below is my config:

    [test0]
    coldPath = $SPLUNK_DB/test0/colddb
    enableDataIntegrityControl = 0
    enableTsidxReduction = 0
    homePath = $SPLUNK_DB/test0/db
    maxTotalDataSizeMB = 512000
    thawedPath = $SPLUNK_DB/test0/thaweddb
    maxDataSize = 750
    maxWarmDBCount = 500
    frozenTimePeriodInSecs = 31556926
    coldToFrozenDir = /opt/backup/index

Is the above configuration correct?
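One thing worth noting when reading that config: frozenTimePeriodInSecs is the age at which buckets are frozen (i.e., moved to coldToFrozenDir), and Splunk never deletes frozen data on its own. Under that reading, a hedged sketch of the 6-month archive step would be:

    [test0]
    homePath = $SPLUNK_DB/test0/db
    coldPath = $SPLUNK_DB/test0/colddb
    thawedPath = $SPLUNK_DB/test0/thaweddb
    # ~6 months: buckets older than this are archived to coldToFrozenDir
    frozenTimePeriodInSecs = 15552000
    coldToFrozenDir = /opt/backup/index

The 12-month delete would then have to be an external cleanup job (e.g., cron) on /opt/backup/index, since nothing in indexes.conf ages out the frozen directory.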
Hi all, I have been using the splunklib package in Python to connect to the Splunk API for some time now, and it works fine. A sample search I use is provided below:

    searchquery = """search index=wineventlog EventCode=4688 earliest=-4h
    | fields user, ETC, ETC, ETC
    | table user, ETC, ETC, ETC"""

    resolveQuery = SplunkQuery(host, port, username, password)
    df = resolveQuery.splunk_fetch(searchquery)

The search returns a pandas dataframe (in Python) containing the required information. When I try to retrieve an inputlookup, however, the search doesn't return any information, only an empty dataframe. Below is an example of a search query I use to try and retrieve an inputlookup:

    searchquery = """search | inputlookup infomation.csv"""

Any help would be highly appreciated: how can I retrieve inputlookups using the splunklib package in Python?
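A likely culprit, offered as a guess: inputlookup is a generating command, so it has to be the first command in the search string, prefixed with a bare pipe and without the leading search keyword. A minimal sketch reusing the poster's own SplunkQuery wrapper:

    # "search | inputlookup ..." pipes zero events into inputlookup;
    # starting with "| inputlookup" lets it generate rows itself
    searchquery = """| inputlookup infomation.csv"""
    df = resolveQuery.splunk_fetch(searchquery)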
I wondered if someone can assist me. We're trying to send some log files from AWS in JSON format, coming over as events. I've copied the log into a text file and gone to Add Data; initially it fails, but after changing the sourcetype to _json it formats fine. However, when trying to send the data in properly, I just get a parsing error. Is there an easy way to identify what's causing this? The format is as follows:

    {
      "time": "1628855079519",
      "host": "sgw-3451B77A",
      "source": "share-114D5B31",
      "sourcetype": "aws:storagegateway",
      "sourceAddress": "xx.xx.xx.xx",
      "accountDomain": "XXX",
      "accountName": "server_name",
      "type": "FileSystemAudit",
      "version": "1.0",
      "objectType": "File",
      "bucket": "test-test-test",
      "objectName": "/random-210813-1230.toSend",
      "shareName": "test-test-test",
      "operation": "ReadData",
      "timestamp": "1333222111111",
      "gateway": "aaa-XXXXXXA",
      "status": "Success"
    }
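If the feed arrives with sourcetype aws:storagegateway rather than _json, a props.conf stanza for that sourcetype is one hedged place to start; everything below is an assumption to be tuned, not a known-good config for this source:

    [aws:storagegateway]
    # Parse the whole event as JSON
    INDEXED_EXTRACTIONS = json
    KV_MODE = none
    # "time" is epoch milliseconds in the sample event
    TIME_PREFIX = "time"\s*:\s*"
    MAX_TIMESTAMP_LOOKAHEAD = 13
    TIME_FORMAT = %s%3N

Comparing the failing events byte-for-byte against the working Add Data sample (e.g., multiple JSON objects per line, or a wrapper around them) is usually the fastest way to spot what breaks the parser.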
So I need to run a search on a firewall index where I need to look for field values matching entries from two lookup files, src.csv and dst_withsubnets.csv, and output the corresponding fields. Test SPL from my lab:

    | makeresults
    | eval src_ip="1.1.1.1", src_translated_ip="3.3.3.3", dest_ip="192.168.1.1", dest_port=443, action="drop"
    | join src_ip [| inputlookup src.csv | rename src AS src_ip]
    | join dest_ip [| inputlookup dst_withsubnets.csv | rename dst AS dest_ip]
    | table _time, src_ip, src_translated_ip, dest_ip, dest_port, action

src.csv:

    src
    1.1.1.1

dst_withsubnets.csv:

    dst
    192.168.1.0/24

As you can notice, the SPL is searching for dest_ip in a lookup that only has destination subnets. To make it work, I have also added the following transforms.conf:

    [dst_withsubnets]
    filename = dst_withsubnets.csv
    match_type = CIDR(dst)
    max_matches = 1

However, it's still not working.
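A hedged observation: match_type = CIDR only applies when the lookup definition is used via the lookup command; inputlookup plus join just reads the raw rows and does exact matching. A sketch along those lines, assuming a lookup definition named src is also defined for src.csv:

    | makeresults
    | eval src_ip="1.1.1.1", src_translated_ip="3.3.3.3", dest_ip="192.168.1.1", dest_port=443, action="drop"
    | lookup dst_withsubnets dst AS dest_ip OUTPUT dst AS dst_match
    | lookup src src AS src_ip OUTPUT src AS src_match
    | where isnotnull(src_match) AND isnotnull(dst_match)
    | table _time, src_ip, src_translated_ip, dest_ip, dest_port, action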
I need to trigger an alert when a process ID is not running. Here is my query:

    index=os source=ps sourcetype=ps host=gk2406 process=ora_d4 process_id=5955
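One common pattern, sketched under the assumption that the ps events arrive at least once per scheduling window: count the matching events and alert on zero.

    index=os source=ps sourcetype=ps host=gk2406 process=ora_d4 process_id=5955
    | stats count
    | where count = 0

Scheduled as an alert that triggers when the number of results is greater than zero, this fires exactly when the process is absent from the window.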
Hello, I wrote a props.conf configuration for the following CSV file but am getting an error message. Any help will be highly appreciated. Thank you so much.

    [ csv ]
    SHOULD_LINEMERGE=false
    CHARSET=UTF-8
    INDEXED_EXTRACTIONS=csv
    TIME_FORMAT=%Y%m%d %H:%M:%S:%Q
    HEADER_FIELD)LINE_NUMBER=1
    TIMESTAMP_FIELDS=TIMESTAMP
    category=Structured
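Two things in that stanza stand out, and the corrected lines below are a hedged guess at what was intended:

    # stanza names should not contain spaces: "[ csv ]" -> "[csv]"
    [csv]
    SHOULD_LINEMERGE = false
    CHARSET = UTF-8
    INDEXED_EXTRACTIONS = csv
    TIME_FORMAT = %Y%m%d %H:%M:%S:%Q
    # ")" looks like a typo for "_" in the setting name
    HEADER_FIELD_LINE_NUMBER = 1
    TIMESTAMP_FIELDS = TIMESTAMP

(category is a UI-only attribute and can usually be left out of the stanza itself.)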
Hello Splunk community, when trying to splice multiple events to generate a specific output from a Splunk index, I've been running into the "Regex: syntax error in subpattern name (missing terminator)" error often.

For example, these events are shown in a Splunk index (each line is a different Splunk event):

    “This is one way to do everything”
    “Regular Expressions in Splunk”
    “test: 123fourfive”
    “and escape characters”
    “test: !A-Z”
    “are an interesting exercise in”
    “test: ~Lettersand Numbers”
    “finding out how Regex works”
    “test: What is the? AndWhen to use it!”
    “in Splunk.”
    “test:

This is the Splunk query:

    *randomsplunkindex*|rex field=_raw “(?<OUTPUT>(?<=” “).*(?=” “test:))”

I'm trying to get the output between the two quotes, so that the output would be:

    Regular Expressions in Splunk
    and escape characters
    are an interesting exercise in
    finding out how Regex works
    in Splunk.

However, I've run into this error: "Regex: syntax error in subpattern name (missing terminator)". I've tried these combinations of escape characters to avoid it:

    *randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=\" \").*(?=\" \"test:))"
    *randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=\" ").*(?=" \"test:))"
    *randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=\" \").*(?=" "test:))"
    *randomsplunkindex*|rex field=_raw "(?<OUTPUT>(?<=" ").*(?=\" \"test:))"
    *randomsplunkindex*|rex field=_raw "(?<OUTPUT>\(?<=" ").*(?=" "test:)\)"

Is there any way to use regular expressions so that, if there are characters like “ or ‘ in an event, you can still extract the output using rex?
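A hedged sketch of one way out, assuming each event is a single line wrapped in curly quotes and the goal is to keep the lines that do not start with test: — note the quotes in the sample events are curly characters (“ ”), not ASCII ", so the regex can match them literally and nothing needs escaping inside the SPL string:

    index=randomsplunkindex
    | rex field=_raw "^“(?<OUTPUT>(?!test:).*)”$"
    | where isnotnull(OUTPUT)
    | table OUTPUT

If the events actually contain ASCII double quotes, replacing “ and ” with \" keeps the SPL string itself valid.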
Hello, how can I change my profile email? I don't have any subscription, so I can't create a support case. Thanks in advance.
Hi, how can I create an issue (on demand) in my issue tracker from Splunk? E.g., while searching through the logs I suddenly find two events that need work; I then hit a button in Splunk and it automatically creates an issue and attaches those events to it in my issue tracker.

FYI: I know an alert can do this, but an alert is an automatic process; I need it on demand.

Any idea? Thanks
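One on-demand mechanism worth sketching is an event workflow action, which adds an entry to each event's menu in the search UI; the tracker URL and the fields passed are assumptions for illustration:

    # workflow_actions.conf -- hypothetical tracker endpoint
    [create_issue]
    label = Create issue in tracker
    type = link
    link.method = get
    display_location = event_menu
    link.uri = https://tracker.example.com/new?host=$host$

Clicking the action on an event opens the tracker's "new issue" form pre-filled from the event's fields; attaching several events to one issue at once would need the tracker's API instead.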
Dears, hope you're doing great. The Splunk indexer is not booting, but the other servers are booting and working (all servers are CentOS 7); we would appreciate your help. PS: we have made an update on the vCenter domain admin user (changes to the domain admin name). Kindly please help. Best Regards, Yousef H.
Hi Team, I have a situation where I want my team to have power user access in production (for creating knowledge objects) but with no write access to KOs whose owner is "nobody". Only one user should have the write capability. Is there any way I can achieve this configuration? I do not want to create a separate role just for the one user who will have write access.
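For context, a sketch of how write access on shared objects is usually pinned down, via the app's metadata/local.meta; the object type shown is just an example:

    # metadata/local.meta in the relevant app
    [savedsearches]
    access = read : [ * ], write : [ admin ]

Objects owned by "nobody" live at app level, so their write list comes from this metadata; the access= syntax grants by role, not by user, which is why this kind of requirement usually ends up being solved with roles after all.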
Hi, I have a compressed file that contains several files; in source, only the compressed file is shown. E.g., the compressed file's name is log.bz2 and it contains log1, log2, and log3. Currently source only shows log.bz2. How can I find which event belongs to which file? Something like this: log.bz2 > log2. Any idea? Thanks
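If per-member attribution is a hard requirement, one hedged workaround is to extract the archive outside Splunk and monitor the members directly, so each file gets its own source; the paths below are placeholders:

    # inputs.conf -- monitor the extracted members instead of the archive
    [monitor:///var/log/extracted/log*]
    index = main
    disabled = false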
I'm having trouble indexing and monitoring the alerts.log file from OSSEC. I've tried manually adding "/var/ossec/alerts/alerts.log" to the data inputs with sourcetype automatic and the default index, but with no luck. When I search in the default Search and Reporting app, no alerts show up, and when I use the Reporting and Management app for OSSEC this error shows up. I've tried rebuilding the lookup table as well, but no luck. Attached are screenshots showing the file data inputs and the result from regenerating the lookup table. If anyone has any idea on how to properly set up the app, please let me know. Thanks
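A hedged starting point: apps like this typically key their searches on a specific index and sourcetype rather than the defaults, so an explicit monitor stanza is worth comparing against whatever the app's documentation specifies (the values below are assumptions, not the app's confirmed requirements):

    # inputs.conf -- sourcetype/index names should be checked against the app
    [monitor:///var/ossec/alerts/alerts.log]
    sourcetype = ossec
    index = ossec
    disabled = false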
Splunk 8's HEC defaults to TLSv1.2 only, but I need to allow TLSv1.1 with AES256-SHA in order for puppetserver 2.7.0 to connect.

So far, I figured that in order to affect HEC protocols, I must also alter $SPLUNK_HOME/etc/system/local/web.conf. So I changed sslVersion to *, and made sure that AES256-SHA is in cipherSuite. I can verify that TLSv1.1 is allowed when using the openssl command line to connect; the same code in Puppet's splunk_hec reporter is also able to connect via TLSv1.1 when invoked from native Ruby (Ruby 2.0). But I cannot externally examine the exact cipher used, even with Wireshark.

Anyway, even with this setup on Splunk's side, I still get "ssl3_get_client_hello:no shared cipher" when puppetserver tries to connect. The difference is that puppetserver 2.7.0 runs in an outdated JRuby that uses Ruby 1.9. Nevertheless, https://ask.puppet.com/question/33316/puppet-https-connection-using-latest-tls-version-and-cipher-suites/ states "the only way to get puppet to successfully connect is to enable the AES256-SHA cipher." So, I would expect the combination to be successful.

What other things do I need to change?
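For completeness, HEC's own TLS settings live in inputs.conf under the [http] stanza, separate from web.conf; a hedged sketch of the relevant knobs, with the cipher list shortened to the one cipher in question (in practice it would be appended to the existing list):

    # $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
    [http]
    sslVersions = tls1.1, tls1.2
    cipherSuite = AES256-SHA

After changing either file a restart is needed, and `openssl s_client -tls1_1 -cipher AES256-SHA -connect host:8088` is one way to confirm the cipher actually negotiates.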