All Posts


These all sort by the field value at the last time bucket of the timechart; they are not sorting by name - what makes you think that?
I would rather have a straight SPL search solution to drive the Trellis layout. The proposed solutions I have seen have the effect of sorting the resulting | timechart fields by name rather than by value. The goal is to sort by field value. I appreciate the attempts so far.
Adding to what @victor_menezes said: when uploading apps to Splunk Cloud you must pass various checks, and Splunk AppInspect has various standards (on-premises is different). When you upload the app it will run the check_package_compression checks, which do not allow hidden files or __MACOSX directories (the latter typically appear when packaging on a Mac): "Splunk App does not contain any directories or files that start with a ., or directories that start with __MACOSX"
So, check your compressed app file and remove any hidden folders or files - it may even have a leading /.
tar -tzf your_file.tar.gz
You may be able to use the git archive command (but you need to ensure you remove hidden folders etc.).
It's worth looking here for the latest cloud check updates, as the criteria for cloud do change:
https://dev.splunk.com/enterprise/docs/relnotes/relnotes-appinspectcli/whatsnew/
https://dev.splunk.com/enterprise/reference/appinspect/appinspectcheck/
You need to mind the basics of posing an answerable question: clearly illustrate data input (in a text table), illustrate desired output (also in text unless it is a graphic visualization), and explain the logic in clear language.  It is really unclear what the relationship between these field values is.  If I have to speculate, Appname and requestor are always paired, and S.No and host are always paired.  Is this correct?  Based on this, I made up the following mock data:

Appname  S.No  host  logtype  requestor     source
A        1     ghi   somelog  abc@def.com   somelog.jkl
A        1     ghi   somelog  abc@def.com   somelog.klm
A        1     ghi   somelog  abc@def.com   somelog.lmn
B        2     hij   somelog  bcd@efg.com   somelog.klm
B        2     hij   somelog  bcd@efg.com   somelog.jkl
C        3     ijk   somelog  cde@fgh.com   somelog.lmn
C        3     ijk   somelog  cde@fgh.com   somelog.aaa
A        4     xyz   somelog  abc@def.com   somelog.opq
A        4     xyz   somelog  abc@def.com   somelog.opq
A        4     xyz   somelog  abc@def.com   somelog.bbb
B        5     wxy   somelog  bcd@efg.com   somelog.rst
B        5     wxy   somelog  bcd@efg.com   somelog.uvw
C        6     vwx   somelog  cde@fgh.com   somelog.uvw
C        6     vwx   somelog  cde@fgh.com   somelog.rst
C        6     vwx   somelog  cde@fgh.com   somelog.opq
A        7     aaa   somelog  abc@def.com   somelog.klm
A        7     aaa   somelog  abc@def.com   somelog.lmn
A        7     aaa   somelog  abc@def.com   somelog.jkl
B        8     bbb   somelog  bcd@efg.com   somelog.klm
C        9     ccc   somelog  cde@fgh.com   somelog.lmn
C        9     ccc   somelog  cde@fgh.com   somelog.aaa
C        9     ccc   somelog  cde@fgh.com   somelog.bbb
A        10    hij   somelog  abc@def.com   somelog.aaa
B        11    jkl   somelog  bcd@efg.com   somelog.aaa
B        11    jkl   somelog  bcd@efg.com   somelog.bbb
B        11    jkl   somelog  bcd@efg.com   somelog.ccc
C        12    ijk   somelog  cde@fgh.com   somelog.abc
C        12    ijk   somelog  cde@fgh.com   somelog.ccc

As @ITWhisperer says, Splunk is not a spreadsheet.  You cannot have cell merge and such.  But if you really want to simulate the effect, you can do something like this:

| stats values(source) as source by Appname logtype requestor S.No host
| rename S.No as S_No
| eval source = mvjoin(source, ", ")
| tojson S_No host source
| stats values(_raw) as _raw by Appname requestor logtype
| eval host = mvmap(_raw, spath(_raw, "host"))
| eval S.No = mvmap(_raw, spath(_raw, "S_No"))
| eval source = mvmap(_raw, spath(_raw, "source"))
| table S.No Appname requestor logtype source

Result from the above mock data is

S.No        Appname  requestor    logtype  source
1 10 4 7    A        abc@def.com  somelog  somelog.jkl, somelog.klm, somelog.lmn
                                           somelog.aaa
                                           somelog.bbb, somelog.opq
                                           somelog.jkl, somelog.klm, somelog.lmn
11 2 5 8    B        bcd@efg.com  somelog  somelog.aaa, somelog.bbb, somelog.ccc
                                           somelog.jkl, somelog.klm
                                           somelog.rst, somelog.uvw
                                           somelog.klm
12 3 6 9    C        cde@fgh.com  somelog  somelog.abc, somelog.ccc
                                           somelog.aaa, somelog.lmn
                                           somelog.opq, somelog.rst, somelog.uvw
                                           somelog.aaa, somelog.bbb, somelog.lmn

Is this something you are looking for?
Here is mock data emulation:

| makeresults format=csv data="S.No, requestor, Appname, host, logtype, source
1, abc@def.com, A, ghi, somelog, somelog.jkl
1, abc@def.com, A, ghi, somelog, somelog.klm
1, abc@def.com, A, ghi, somelog, somelog.lmn
2, bcd@efg.com, B, hij, somelog, somelog.klm
2, bcd@efg.com, B, hij, somelog, somelog.jkl
3, cde@fgh.com, C, ijk, somelog, somelog.lmn
3, cde@fgh.com, C, ijk, somelog, somelog.aaa
4, abc@def.com, A, xyz, somelog, somelog.opq
4, abc@def.com, A, xyz, somelog, somelog.opq
4, abc@def.com, A, xyz, somelog, somelog.bbb
5, bcd@efg.com, B, wxy, somelog, somelog.rst
5, bcd@efg.com, B, wxy, somelog, somelog.uvw
6, cde@fgh.com, C, vwx, somelog, somelog.uvw
6, cde@fgh.com, C, vwx, somelog, somelog.rst
6, cde@fgh.com, C, vwx, somelog, somelog.opq
7, abc@def.com, A, aaa, somelog, somelog.klm
7, abc@def.com, A, aaa, somelog, somelog.lmn
7, abc@def.com, A, aaa, somelog, somelog.jkl
8, bcd@efg.com, B, bbb, somelog, somelog.klm
9, cde@fgh.com, C, ccc, somelog, somelog.lmn
9, cde@fgh.com, C, ccc, somelog, somelog.aaa
9, cde@fgh.com, C, ccc, somelog, somelog.bbb
10, abc@def.com, A, hij, somelog, somelog.aaa
11, bcd@efg.com, B, jkl, somelog, somelog.aaa
11, bcd@efg.com, B, jkl, somelog, somelog.bbb
11, bcd@efg.com, B, jkl, somelog, somelog.ccc
12, cde@fgh.com, C, ijk, somelog, somelog.abc
12, cde@fgh.com, C, ijk, somelog, somelog.ccc"
``` data emulation above ```
So how can I ensure asset data correlation with logs, as it is based on IPs? Anyway, can it be done with hostname?
Hi @simuneer,
your request isn't so easy to answer because a lot of information is missing:
do you have an asset list for your data (also from your Vulnerability Assessment tool)?
do you have a CVE list from a site (e.g. VulnDB) or a tool?
do you have a common key to correlate your CVE list to the assets (e.g. OS)?
Having the above information, it's possible: we developed this for one of our customers, but it isn't a question for the Community, it's a project.
Ciao.
Giuseppe
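If those three pieces are in place, the correlation itself is usually just a couple of lookups. Purely as an illustration (the index, lookup names and field names below - your_va_index, asset_list.csv, cve_by_os.csv, host, os, owner, cve_id, cvss_score - are assumptions, not anything from a real environment), a minimal sketch could look like:

index=your_va_index
``` enrich each event with asset details, keyed on host (hypothetical asset lookup) ```
| lookup asset_list.csv host OUTPUT os owner
``` map the asset's OS to its known CVEs (hypothetical CVE lookup) ```
| lookup cve_by_os.csv os OUTPUT cve_id cvss_score
| stats values(cve_id) as cve_id max(cvss_score) as max_cvss by host os owner

If the logs only carry a hostname while the asset list is keyed on IP (or vice versa), an extra lookup that maps hostname to IP is needed first - which is exactly the common-key problem Giuseppe describes.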
There are also two pure search versions, which are very ugly. Assuming this is a base search that creates the timechart:

| makeresults count=3600
| streamstats c
| eval _time=relative_time(now(), "@m") - c
| eval type=mvindex(split("Budgeting,General Ledger,Payables,Expenses,eProcurement,Purchasing",","), random() % 6)
| timechart fixedrange=f span=1m count by type

then this version does a double transpose - the good thing about the double transpose is that it does not change the column order on the second transpose - however, it does require that you know the sort column number, although you could work that out in a separate dashboard search:

| transpose 0
| sort - "row 60"
| transpose 0 header_field=column
| fields - column

This version uses a mechanism to get the column names, sort them and then prefix each column with a numeric, which will cause the columns to be ordered correctly; however, you can't rename them back, as Splunk will then reorder alphabetically.

| appendpipe [
  | stats latest(*) as *
  | transpose 0
  | sort - "row 1"
  | streamstats c
  | eval name="_".c."_".column
  | fields name
]
| eventstats values(name) as _name
| fields - name
| foreach * [
  eval f=mvindex(_name, mvfind(_name, "<<FIELD>>")), f=replace(f, "^_", ""), {f}='<<FIELD>>'
  | fields - "<<FIELD>>"
]
| fields - f
| where isnotnull(_time)
Hi @shadysplunker,
sorry, maybe I misunderstood.
Please describe your requirements in more detail, because tcpout is only for sending logs from a UF to an Indexer - why are you speaking of HTTP?
The above link describes how to perform selective addressing of logs from a UF to IDXs.
Ciao.
Giuseppe
Hi @VijaySrrie,
in general, you only have to read them using a Forwarder (Universal or Heavy); if you don't have a Forwarder on the server where the reports are stored, you should copy them to a server with a Forwarder.
But the question is: what's the format of these reports? They are readable if they are in text format (.txt or .csv), not .pdf or .xlsx.
To bring them in, you only have to create an input stanza in inputs.conf (on the Forwarder) to read these reports.
Ciao.
Giuseppe
I have wasted so many hours trying to troubleshoot why my ForwardedEvents were not being ingested into the index. Thank you, this fixed the issue. The formatting of the search results is very different though, and not all fields are showing up in the results; not sure why.
Edit: So how can I get newly ingested events to look the same and have the same fields? E.g. I'm only using Splunk to ingest forwarded AppLocker logs. I can't display fields for publisher or file path for newly ingested events. They only show up for old ones that were ingested before the issue.
Edit 2: I think I fixed it by adding this line back in:
renderXML = 1
Just in case anyone has the same issue as me: my KV store was broken, which stopped the task server from starting.

splunkd.log
05-31-2024 01:59:56.804 +0000 ERROR ExecProcessor [1322 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery.sh" 01:59:56.804 [main] INFO com.splunk.dbx.utils.SecurityFilesGenerator - Initializing secret kv store collection
05-31-2024 01:59:56.867 +0000 ERROR ExecProcessor [1322 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery.sh" 01:59:56.867 [main] INFO com.splunk.dbx.utils.SecurityFilesGenerator - secret KV Store not found, creating

mongod.log
2024-05-31T03:20:05.019Z I ACCESS [main] permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key are too open

Changing the permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key to 400 and restarting fixed the issue for me.
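As a side note, before digging into DB Connect it can help to confirm whether the KV store itself is healthy. A quick check from the search bar is the introspection REST endpoint (a generic sketch - the endpoint path and field names are from memory, so verify them on your version):

| rest splunk_server=local /services/server/introspection/kvstore/serverstatus
``` current.status should report ready on a healthy KV store ```
| fields splunk_server current.status current.replicationStatus

If that does not come back as ready, the mongod.log permission/keyfile errors above are a good place to start looking.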
There are a couple of ways (at least) to do this. Here is an example dashboard that shows three techniques.
1. Uses a token in a base search to define the sort order as a token ($sort_order$) - there is an annoying issue with this method, which means that once the trellis is shown with the order, it will NOT reorder the trellis if the underlying table order changes.
2. Uses 6 separate single panels aligned horizontally and six tokens that define the display for the viz ($f0$ to $f5$). This re-orders on change.
3. Uses @chrisyounger's number set viz - https://splunkbase.splunk.com/app/4537 - which will do all this for you, does not require tokens, and will re-order when things change.
There is possibly a way to do it directly in the search (appendpipe...?), but with the dashboard, it's pretty simple.

<dashboard version="1.1" theme="light">
  <label>Sort_TC</label>
  <row>
    <panel>
      <table depends="$hidden$">
        <search id="base">
          <query>
            | makeresults count=3600
            | streamstats c
            | eval _time=relative_time(now(), "@m") - c
            | eval type=mvindex(split("Budgeting,General Ledger,Payables,Expenses,eProcurement,Purchasing",","), random() % 6)
            | timechart fixedrange=f span=1m count by type
          </query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
      <table depends="$hidden$">
        <title>Sorting order is $sort_order$</title>
        <search base="base">
          <done>
            <set token="sort_order">$result.sort$</set>
            <set token="f0">$result.f0$</set>
            <set token="f1">$result.f1$</set>
            <set token="f2">$result.f2$</set>
            <set token="f3">$result.f3$</set>
            <set token="f4">$result.f4$</set>
            <set token="f5">$result.f5$</set>
          </done>
          <query>| tail 1
| fields - _span _time
| transpose 0
| sort - "row 1"
| stats list(column) as sort list("row 1") as counts
| foreach 0 1 2 3 4 5 [ eval f&lt;&lt;FIELD&gt;&gt;=mvindex(sort, &lt;&lt;FIELD&gt;&gt;) ]
| eval sort="\"".mvjoin(sort, "\" \"")."\""</query>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
      <single>
        <search base="base">
          <query>| table _time $sort_order$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">1</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <single>
        <title>$f0$</title>
        <search base="base">
          <query>| table _time $f0|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f1$</title>
        <search base="base">
          <query>| table _time $f1|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f2$</title>
        <search base="base">
          <query>| table _time $f2|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f3$</title>
        <search base="base">
          <query>| table _time $f3|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f4$</title>
        <search base="base">
          <query>| table _time $f4|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
      <single>
        <title>$f5$</title>
        <search base="base">
          <query>| table _time $f5|s$</query>
        </search>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <viz type="number_set_viz.number_set_viz">
        <search base="base">
          <query>| stats sparkline(max(*)) as sparkline_* latest(*) as *
| appendpipe [ | foreach sparkline_* [ eval &lt;&lt;MATCHSTR&gt;&gt;='&lt;&lt;FIELD&gt;&gt;' ] ]
| fields - sparkline_*
| transpose 0
| rename "row 1" as value, column as title "row 2" as sparkline
| sort - value</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
</dashboard>
Hi Team, we have some reports in a shared path; how can we bring them into Splunk?
Before this stats command, also add this: | filldown Appname - so that empty Appname rows will adopt the name from the row above.
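For placement, a minimal sketch, assuming the stats command and field names from the mock-data answer above:

``` fill empty Appname cells from the previous row before grouping ```
| filldown Appname
| stats values(source) as source by Appname logtype requestor S.No host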
Are you clear on the difference between NULL and EMPTY? Your mvmap is checking for non-EMPTY values among the values of the MV field; it is not checking for NULL. This pair of lines is looping over each of the MV values of ImpConReqID and removing only EMPTY values:

| eval ImpConReqID= coalesce(ImpConReqId,impConReqId1)
...
| eval ImpCon=mvmap(ImpConReqID,if(match(ImpConReqID,".+"),"ImpConReqID: ".ImpConReqID,null()))

So if you have real null values in your MV, then you need to check for null, not empty, i.e.

| eval ImpCon=mvmap(ImpConReqID,if(isnotnull(ImpConReqID),"ImpConReqID: ".ImpConReqID,null()))
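A tiny runnable illustration of the difference, using made-up values (the split deliberately produces an EMPTY middle value, not a NULL one):

| makeresults
``` build a 3-value MV field whose middle value is the empty string ```
| eval ImpConReqID=split("ABC123,,DEF456", ",")
``` match(".+") drops the EMPTY value, so only 2 results survive ```
| eval empty_check=mvmap(ImpConReqID, if(match(ImpConReqID, ".+"), "ImpConReqID: ".ImpConReqID, null()))
``` isnotnull() keeps the EMPTY value, so all 3 results survive ```
| eval null_check=mvmap(ImpConReqID, if(isnotnull(ImpConReqID), "ImpConReqID: ".ImpConReqID, null()))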
OK, so try the search as I expect that will give you what you want
Hello. This is my first time posting, so please excuse any mistakes.
I believe $SPLUNK_HOME/etc/passwd contains the following fields, and I would like to know what goes into <?1> and <?2>:
: <login user name> : <password> : <?1> : <name shown in Splunk Web> : <role> : <email address> : <?2> : <if "Require password change on first login" is set, the [force_change_pass] value goes here> : <a number (not sure what it is - uid?)>
Also, please let me know if any part of my understanding is wrong.
Hey @meshorer,
Have you tried to SSH to your Phantom/SOAR server and run this?
phenv set_preference --indicators yes
This should be enough to re-enable the indicators for you. Once executed, please allow up to 5 minutes for the system to start the indicators again.
Hey Alex,
You mean you are trying to use your tar.gz from GitLab? It can be the case that git itself is appending other folders to the root dir structure in that tar.gz. Check the contents to see if that is the case (including hidden ones). It may simply be a matter of removing them and trying again.
Hi @SureshkumarD,
I tried out your code - the rows aren't showing up as links because of the table formatting / row color setting. Remove this line from your code:

"rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",

That is causing the "link" effect on the clickable rows to disappear. Here's the full viz code:

"visualizations": {
    "viz_qFxEKJ3l": {
        "type": "splunk.table",
        "options": {
            "count": 5000,
            "dataOverlayMode": "none",
            "drilldown": "none",
            "backgroundColor": "#FAF9F6",
            "tableFormat": {
                "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
                "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
                "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.customUrl",
                    "options": {
                        "url": "$row.URL.value|n$",
                        "newTab": true
                    }
                }
            ],

Give that a go on your dashboard.