All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

@gcusello Could you please reply to my query as mentioned previously?  
The question has been answered many times before. @isoutamo already pointed you to a trove of resources for writing such a search. If you don't understand some specifics about anything said in those other threads with solutions, don't hesitate to ask for an explanation. But don't expect people to jump in and do your job for you - the issue is well known and has a well-known method of dealing with it, explained many times. So all you need to do is dig into those resources, read the solutions provided there, and try to construct your own. If you encounter obstacles along the way, ask away.
I did try to find in the documentation that summaryindex is an alias for collect, but it's not documented as far as I can see. However, if you start typing | summaryin , Splunk will show info for the collect command. So yes, it's the same command.
I need a solution from scratch. Could someone help here?
You should accept ptrsnk's answer, not your own reply.
Hi All, I don't have many resources to build an ideal network environment for forwarding logs to Splunk, so I'm looking for a way to simulate, or a source to obtain, many common data sources in Splunk (like some SIEM solutions have scripts to forward syslog through port 514). Any answer will be highly appreciated. Regards.
I couldn't get cird_address=remoteIP ."/32" to work in my search. I created a simpler search and it worked fine. Your suggestion was correct; I need to do more work on my search. Thanks for your help! Peter
Being somewhat of a journeyman, I found the proper way to use timewrap a bit of a mystery. So, while the answer may be apparent to many, I was not sure how to wield the information. Thank you for the response. I will give it a go on Monday.
Yes, I tried the . (dot):

| eval cird_address=remoteIP ./32

Error in 'EvalCommand': The expression is malformed. An unexpected character is reached at '/32'.

| eval cird_address=remoteIP ."/32"

This one does NOT show an error, but I get no results. Maybe there is something farther down in the search that's not correct. I'll check that and respond again. Thanks for your suggestion.
Note that I used System.Web.HttpUtility.JavaScriptStringEncode as a shortcut for encoding/escaping strings. KV_MODE = auto_escaped only handles a few escape sequences. If you prefer, you can simply replace \ and " with \\ and \", respectively, in strings before writing them.
Hi @Ismail_BSA, We can use the SqlServer PowerShell module to read SQL Server audit files. As an administrator, install the SqlServer PowerShell module under PowerShell 5.1, which should be installed by default on all modern Windows releases: PS> Install-Module SqlServer With the module installed, we can read .sqlaudit files created by SQL Server using Read-SqlXEvent. Column/field information is available at https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-records. Columns with type bigint or varbinary will be read as byte arrays and must be converted to strings using a .NET object of the appropriate type. We can write a small PowerShell script to act as a stream reader for .sqlaudit files read by Splunk's archive processor (see below). Note that Read-SqlXEvent uses System.IO.Stream internally and calls Stream.Length, which throws "Stream does not support seeking" for forward-only streams. We'll work around this issue by copying the stream to a temporary file, reading the temporary file, and finally, deleting the temporary file.
C:\Temp\Stream-SqlAudit.ps1:

$file = New-TemporaryFile
$output = $file.Open([System.IO.FileMode]::Append, [System.IO.FileAccess]::Write)
$stdin = [System.Console]::OpenStandardInput()
$stdout = [System.Console]::Out
$buffer = New-Object byte[] 16384
[int]$bytes = 0

# Copy stdin (the forward-only stream from Splunk) to the temporary file
while (($bytes = $stdin.Read($buffer, 0, $buffer.Length)) -gt 0) {
    $output.Write($buffer, 0, $bytes)
}
$output.Flush()
$output.Close()

# Read the audit events and emit one key="value" line per event
Read-SqlXEvent -FileName "$($file.DirectoryName)\$($file.Name)" | %{
    $event = $_.Timestamp.UtcDateTime.ToString("o")
    $_.Fields | %{
        if ($_.Key -eq "permission_bitmask") {
            $event += " permission_bitmask=`"0x$([System.BitConverter]::ToInt64($_.Value, 0).ToString("x16"))`""
        } elseif ($_.Key -like "*_sid") {
            $sid = $null
            $event += " $($_.Key)=`""
            try {
                $sid = New-Object System.Security.Principal.SecurityIdentifier($_.Value, 0)
                $event += "$($sid.ToString())`""
            } catch {
                $event += "`""
            }
        } else {
            $event += " $($_.Key)=`"$([System.Web.HttpUtility]::JavaScriptStringEncode($_.Value.ToString()))`""
        }
    }
    $stdout.WriteLine($event)
}

$file.Delete()

We can use the invalid_cause and unarchive_cmd props.conf settings to call the PowerShell script. Note that unarchive_cmd strips or escapes quotes depending on the value of unarchive_cmd_start_mode, so we've stored the PowerShell script in a path without spaces to avoid the use of quotes. If PowerShell can't find the path specified in the -File argument, it will exit with code -196608.
Sample props.conf on forwarders, receivers (heavy forwarders or indexers), and search heads:

[source::....sqlaudit]
unarchive_cmd = powershell.exe -ExecutionPolicy RemoteSigned -File C:\Temp\Stream-SqlAudit.ps1
unarchive_cmd_start_mode = direct
sourcetype = preprocess-sqlaudit
NO_BINARY_CHECK = true

[preprocess-sqlaudit]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false

[sqlaudit]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%N%Z
MAX_TIMESTAMP_LOOKAHEAD = 30
KV_MODE = auto_escaped

We can use a batch or monitor stanza to monitor the directory containing .sqlaudit files. Use a batch stanza if the files are moved to the monitored directory atomically to allow Splunk Universal Forwarder to delete the files after they're indexed. Sample inputs.conf:

[monitor://C:\Temp\*.sqlaudit]
index = main
sourcetype = sqlaudit

The script can be refactored as a scripted input; however, using the archive processor allows Splunk to perform file and change tracking on our behalf.
Have you tried using the other concatenation operator - dot vs plus?
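As a minimal sketch of the dot-operator approach (assuming remoteIP holds a single IPv4 address per result, as in the original search):

```
| eval cidr_address = remoteIP . "/32"
```

The . operator is SPL's explicit string-concatenation operator, so it avoids the ambiguity of +, which can attempt numeric addition when an operand looks numeric.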
Hi Jason, Did you find a solution for this? 
Add observation dashboard risks to Splunk integration as an incident. I'm in AU: https://portal.XX.xdr.trendmicro.com/#/app/sase Please advise, thanks.
Hello, I am running a search that returns IP addresses that are sent to a WAF (web application firewall). The WAF requires all IP addresses to be written in CIDR notation. I am only returning single IPs, so I have to add a /32 to each address that I submit. I am using the stats command, looking at different parameters and then counting by IP to produce the list I am submitting. It seems like it should be straightforward using concatenation, but I haven't been able to get to a solution. eval cidr_address=remoteIP + "/32" and varieties of this approach (casting to string, etc.) haven't worked. Appreciate any help anyone can provide.
As @PickleRick says, please ignore the generative AI response. collect is the documented command and it is what you should use when you want to save data to an index from an SPL command  https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/SearchReference/Collect summaryindex is the command that is still used internally by Splunk when you enable summary indexing from within a scheduled saved search and is effectively a synonym for collect. Don't use it - it is not a documented command. A "summary index" is perhaps a poor name for the concept - collect allows you to push anything you like to an index and there is nothing special about that index. Yes, the original intention is that it should contain "summarised data", but in practice a summary index is just an index. Note that the behaviour of _time when you collect data to an index is not well documented. It can change depending on what your data looks like and if your search is done from a scheduled report or not.  
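As a hedged illustration of the documented approach, a scheduled search can push its results to an index with collect (the index name my_summary here is hypothetical and must already exist, and the base search is just an example):

```
index=web sourcetype=access_combined
| stats count by status
| collect index=my_summary
```

Anything the search returns is written as events to that index; there is nothing structurally special about a "summary" index.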
If you only have a common field of _time, are you planning on visual matching and how are you looking to match things inside that minute? You can also use stats to 'join' data together, but perhaps you can expand on your use case with an example so we can give more useful help.
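As a rough sketch of the stats-based 'join' (index and field names here are illustrative, binning to the shared one-minute resolution):

```
(index=indexA) OR (index=indexB)
| bin _time span=1m
| stats values(fieldA) as fieldA values(fieldB) as fieldB by _time
```

Rows where both fieldA and fieldB are populated are the minutes where the two sources overlap.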
You don't need to use the IN construct when using subsearches, as the default returned from a subsearch is

field=A OR field=B OR field=C...

so in practice you can just do

index=index2 [ search index=index1 service IN (22, 53, 80, 8080) | table src_ip | rename src_ip as dev_ip ]
| table dev_ip, OS_Type

However, how many src_ips are you likely to get back from this subsearch? If you get a large number, this may not perform well at all. In that case you will have to approach the problem in a different way, e.g.

index=index2 OR (index=index1 service IN (22, 53, 80, 8080))
``` Creates a common dev_ip field which is treated as the common field between the two indexes ```
| eval dev_ip=if(index=index2, dev_ip, src_ip)
``` Now we need the data to be seen in both indexes, so count the indexes and collect the OS_Type values and split by that common dev_ip field ```
| stats dc(index) as indexes values(OS_Type) as OS_Type by dev_ip
``` And this just ensures we have seen the data from both places ```
| where indexes=2
| fields - indexes

A third way to attack this type of problem is using a lookup, where you maintain a list of the src_ips you want to match in a lookup table. Which one you end up with will depend on your data and its volume, as they will have different performance characteristics. Hope this helps
If this is dashboard logic, where do your parameters come from? Presumably they are tokens from somewhere. If so, you can just construct the token appropriately so you have | search $my_token$ where my_token is constructed elsewhere. Is it from a multiselect dropdown? If so, just use the settings in the multiselect option to set the token prefix/delimiter values.
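As a rough Simple XML sketch of those multiselect settings (the token name, field name, and choices here are illustrative), the input can assemble the IN (...) clause for you so the search only needs | search $my_token$:

```xml
<input type="multiselect" token="my_token">
  <label>Service</label>
  <prefix>service IN (</prefix>
  <suffix>)</suffix>
  <delimiter>, </delimiter>
  <choice value="22">SSH</choice>
  <choice value="53">DNS</choice>
  <choice value="80">HTTP</choice>
</input>
```

With SSH and HTTP selected, the token would expand to service IN (22, 80).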
I don't believe you can use colour names such as Green and Yellow; you have to use hex codes or RGB, see here https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML#Color_palette_types_and_options In your case it's interesting that you get yellow, as I would expect black if it does not understand colour names. Have you tried

<colorPalette type="expression">if(value == "up","#00FF00", "#FFFF00")</colorPalette>