All Posts



Hi, I am trying to create a transaction, but my starting and ending events are not always grouped correctly. I expect the yellow-marked group of events as the result:

index=app sourcetype=prd_wcs host=EULMFCP1WVND121 "EquipmentStatusRequest\"=" D0022
| eval _raw = replace(_raw, "\\\\", "")
| eval _raw = replace(_raw, "\"", "")
| rex "Chute:DTT_S01.DA01.(?<Door>[^\,]+)"
| rex "EquipmentName:DTT_S01.DA01.(?<EquipmentName>[^\,]+)"
| rex "EquipmentType:(?<EquipmentType>[^\,]+)"
| rex "Status:(?<EquipmentStatus>[^\,]+)"
| rex "TypeOfMessage:(?<TypeOfMessage>[^\}]+)"
| eval Code = EquipmentStatus+"-"+TypeOfMessage+"-"+EquipmentType
| lookup Cortez_SS_Reasons.csv CODE as Code output STATE as ReasonCode
| where ReasonCode = "Ready" OR ReasonCode = "Full"
| transaction EquipmentName startswith=(ReasonCode="Full") endswith=(ReasonCode="Ready")
| eval latestTS = _time + duration
| eval counter=1
| accum counter as Row
| table _time latestTS Row ReasonCode
| eval latestTS=strftime(latestTS,"%Y-%m-%d %H:%M:%S.%3N")

The search above produces the overview below, and the marked line is not correct. I don't know how this happened, because I expected the transaction command to always take a group of events starting with "Full" and ending with "Ready" (per the startswith/endswith options above). Thanks in advance.
Hi @sbhatnagar88, in this case you have Splunk and its data in the same file system. Usually Splunk data is stored in a different file system on a different mount point. In your case I suggest migrating the installation as it is; then you'll be able to plan moving the data (the indexes) to a different file system. I don't recommend doing this in one step. In other words, the best practice is to have separate file systems for: / and the operating system, /var, Splunk, and Splunk data. Ciao. Giuseppe
Hi @gcusello, in our case: Splunk home is /splunk, and the Splunk DB is /splunk/var/lib/splunk. Thanks
I have JSON data like this:

"suite":[{"hostname":"localhost","failures":0,"package":"ABC","tests":0,"name":"ABC_test","id":0,"time":0,"errors":0,"testcase":[{"classname":"xyz","name":"foo1","time":0,"status":"Passed"},{"classname":"pqr","name":"foo2)","time":0,"status":"Passed"},....

I want to create a table with Suite, testcase_name and Testcase_status as columns. I have a solution using the mvexpand command, but with large data volumes the output gets truncated by mvexpand.

....| spath output=suite path=suite{}.name | spath output=Testcase path=suite{}.testcase{}.name | spath output=Error path=suite{}.testcase{}.error | spath output=Status path=suite{}.testcase{}.status | search (suite="*") | eval x=mvzip(Testcase,Status) | mvexpand x | eval y=split(x,",") | eval Testcase=mvindex(y,0) | search Testcase IN ("***") | eval suite=mvdedup(suite) | eval Status=mvindex(y,1) | table "Suite" "TestCase" Status

This is the query I'm using, but the results get truncated. Is there an alternative to mvexpand that I could use in the query above?
I am trying to pull datasets from Splunk into my Power BI desktop to analyze them. Yes, I am fetching them via a global search.
Hi @inventsekar, the error occurs with the first rex command.
Hi @sbhatnagar88, the mount point is relevant: you need to be sure that the indexes.conf and splunk-launch.conf files point to the correct mount points (the same as the old installation). In that case you can restore the old $SPLUNK_HOME folder and your installation will run exactly as before. It's usual to have different file systems for the application (Splunk) and the data. What is your situation? What are the $SPLUNK_HOME and $SPLUNK_DB folders? Ciao. Giuseppe
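For reference, the relevant settings might look something like this (illustrative only, using the paths mentioned elsewhere in this thread — verify against your actual files before and after the restore):

```
# $SPLUNK_HOME/etc/splunk-launch.conf (sketch, not a definitive config)
SPLUNK_HOME=/splunk
SPLUNK_DB=/splunk/var/lib/splunk
```

If these paths still resolve to the same mount points after the OS rebuild, the restored installation should find its indexes where it expects them.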
As I said - you have to escape the sensitive characters within the string argument. That means that instead of a single backslash you use two backslashes, and instead of a bare quote you use an escaped quote (backslash quote).
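Following that advice, the escaped version of the rex from this thread might look something like the sketch below. The embedded quotes are the characters that were terminating the string and causing the parser error; escaping them with backslashes keeps them inside the regex (depending on your Splunk version, you may also need to double the backslashes, e.g. \\w):

```
| rex "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH>[^\"]+)\""
| table POH
```

Verify against your actual events — this is only an illustration of the escaping rule.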
And what are you "pulling" from Splunk? Honest question - I have no idea how the Splunk ODBC driver works. Do you define a search as your data source globally, or do you define one every time you call that data source?
Hi @Real_captain, troubleshooting a rex command is often a difficult task, particularly when we don't know what the issue itself is. To understand the error message (search command required before "^"...), it would be great if you could copy-paste a sample log line (with sensitive details like hostnames, IP addresses, etc. removed). Maybe try this step-by-step troubleshooting (note that the embedded quotes are escaped with backslashes so they don't terminate the string argument). First, this rex command:

| rex "(?P<POH>[^\"]+)" | table POH

then second, this rex command:

| rex "\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH>[^\"]+)" | table POH

and at last, this rex command:

| rex "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH>[^\"]+)" | table POH
Hi @gcusello, thanks for the feedback. We are planning to keep exactly the same mount points. 1. In that case, if we take a backup of the /splunk directory and restore it after the new OS is built, will that restore all configuration and data exactly as the original? 2. Also, we planned to separate the data and OS disks, format only the OS disk, and restore the data disk once the new OS is configured. Do you think this approach will work? Thanks, Sushant
@PickleRick   I am getting below error while using the expression with the rex command:  | rex "^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)"   Error in 'SearchParser': Missing a search command before '^'. Error at position '161' of search query 'search index="events_prod_val_ssip_esa" sourcetype...{snipped} {errorcontext = "(?P<POH>[^"]+)"}'.
I am trying to get the data from Splunk into PowerBI. For that, I made a connection between Splunk and Power BI through the Splunk ODBC driver.
@inventsekar This one is actually a bit different from the two yesterday's threads I merged into one. @Real_captain Inline extractions must use named capture groups, which translate directly to extracted fields (with transform-based extractions you can use numbered capture groups to define fields). So you can simply do

| rex "your_regex_here"

with just one caveat: since the argument to the rex command is a string, you have to properly escape all necessary characters (mostly quotes and backslashes).
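To illustrate the named-capture-group point, here is how the pattern from this thread behaves as a plain PCRE-style regex, sketched in Python. The sample line below is made up to match the shape described in the thread (seven comma-separated fields, then a quoted key/value pair) — it is not the poster's actual data:

```python
import re

# Hypothetical log line: seven comma-separated fields, then "KEY": "VALUE"
line = 'a, b, c, d, e, f, g, "AAA_BBB_CCC_DDD_EEE": "98765"'

# The field-extraction regex from the thread; (?P<POH>...) is the
# named capture group that becomes the extracted field.
pattern = r'^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)'

m = re.search(pattern, line)
print(m.group("POH"))  # 98765
```

The same regex works in rex once the quotes are escaped inside the string argument.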
Hi @Real_captain, could you please avoid creating duplicate posts of yesterday's post, and could you please provide us some more details? Then troubleshooting your issue will become easier. Thanks.
Hi @emmanuelkatto23 >>> Are there particular search query optimizations I should consider to speed up the execution time, especially with complex queries? On the DMC (Distributed Management Console) you can find many dashboards/panels; using these you can find which searches took a long time to run, which searches consumed the most resources, etc. There are other considerations too, like avoiding joins. 1) Please tell us which Splunk apps you use, and 2) which Splunk objects (user searches, reports, alerts or dashboards) are consuming the most Splunk resources; then you can start fine-tuning them one by one. Thanks.
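On the "avoiding joins" point, a common pattern is to replace a join with a single combined search plus stats, so the data is scanned once instead of running two searches and joining them. A rough sketch — the index and field names here are made up for illustration:

```
(index=web OR index=auth)
| stats values(status) AS status, values(action) AS action BY user
```

Whether this applies depends on your data, but it is often one of the first optimizations worth trying.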
OK. First things first. What did you do to "connect Splunk with Power BI"? Are you ingesting data from Power BI into Splunk (how?), or are you getting the data from Splunk into Power BI (again - how?)?
Oh, mate... You're trying to tackle several years of experience with a quick forum post. Optimizing searches (just like any programming optimization) is partly science, partly art.

You need to understand how Splunk works: how it breaks an event into single terms, how it stores those terms in indexes, how it searches for data (especially in a distributed architecture), the different command types and how they impact your search processing, what datamodels are and what they aren't (and what an accelerated datamodel means; datamodel acceleration is not the same as the datamodel itself), and how accelerations work. It's not straightforward, but it's not impossible of course.

One thing: a datamodel on its own doesn't accelerate searching. It can accelerate writing searches, because the datamodel definition separates your high-level search from the low-level details of your data. You don't have to care, for example, whether your firewall produces logs with a source IP field called src_ip, src, source_ip or whatever the developers wanted. If your logs were made CIM-compliant by the relevant add-on, you can just search the Network_Traffic datamodel using the src_ip field. And that's it - a datamodel on its own doesn't give you more than that.

But if you enable datamodel acceleration, Splunk periodically searches through the data covered by that datamodel and builds a pre-indexed summary, which you can search faster than the raw data underlying the datamodel.
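To make the last point concrete: once a datamodel such as Network_Traffic is accelerated, you can query its pre-built summaries with tstats instead of searching raw events. A sketch — the where clause is illustrative, not from the original post:

```
| tstats summariesonly=true count
    from datamodel=Network_Traffic
    where Network_Traffic.dest_port=443
    by Network_Traffic.src_ip
```

With summariesonly=true, only the accelerated summaries are searched, which is typically much faster than the equivalent raw-event search.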
Hi Team, can you please let me know how I can use the field extraction formula below directly with the rex command? Field extraction formula: ^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+
Wait a second. First you were asking about "sorting" JSON fields in the output of the search (at least that's how I understood your question). Now you're saying you want to modify the _raw event. By "modifying" I understand that you want to do it before the event is written to an index. Manipulating structured data with just regexes is not a very good idea (maybe except for very simple cases, and even then I'd be very careful).
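To illustrate why regex-only manipulation of structured data is risky, here is a small made-up example in Python: a naive regex truncates a JSON string value as soon as the value contains an escaped quote, while a real JSON parser handles the escaping correctly.

```python
import json
import re

# Made-up JSON event whose "note" value contains an escaped quote.
raw = '{"user": "alice", "note": "said \\"hi\\", left"}'

# Naive regex: grab everything between the quotes after "note":.
# It stops at the first quote it sees - the escaped one - and truncates.
naive = re.search(r'"note":\s*"([^"]*)"', raw).group(1)

# A JSON parser understands the escaping and returns the full value.
parsed = json.loads(raw)["note"]

print(naive)   # truncated: said \
print(parsed)  # full value: said "hi", left
```

The regex breaks on perfectly valid JSON; the same class of problem appears with nested objects, commas inside strings, and reordered keys, which is why structured formats deserve structured parsing.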