Activity Feed
- Got Karma for Re: How do I disable redirection warning? 07-23-2024 08:28 AM
- Posted Re: How do I disable redirection warning? on Dashboards & Visualizations. 06-27-2024 03:09 PM
- Posted Re: alert_actions.conf being ignored on Alerting. 04-01-2024 12:47 PM
- Gave Karma to claudio_manig for Re: alert_actions.conf being ignored. 04-01-2024 12:47 PM
- Posted Re: グラフのY軸の書式設定 (Formatting the Y-axis of a chart) on Splunk Search. 05-15-2023 08:55 AM
- Tagged Re: グラフのY軸の書式設定 (Formatting the Y-axis of a chart) on Splunk Search. 05-15-2023 08:55 AM
- Posted Re: グラフのY軸の書式設定 (Formatting the Y-axis of a chart) on Splunk Search. 05-12-2023 11:04 AM
- Posted Re: グラフのY軸の書式設定 (Formatting the Y-axis of a chart) on Splunk Search. 05-01-2023 08:43 AM
- Posted Re: Streamを使用してWireDataを視覚化 (Visualizing wire data using Stream) on All Apps and Add-ons. 04-18-2023 06:58 PM
- Posted Re: 特定期間を経過したデータを抽出したい (How to extract data after a certain period of time has passed?) on Splunk Search. 04-07-2023 10:49 AM
- Posted Re: 集計軸が違う場合にCount数を加工して出力する方法についてお教え下さい (How to process and output the count when the aggregation axes differ) on Splunk Search. 04-07-2023 10:27 AM
- Got Karma for Re: メインサーチのイベントの時間をサブサーチに渡したい (Passing the event time of the main search to a subsearch). 02-15-2023 04:27 PM
- Posted Re: メインサーチのイベントの時間をサブサーチに渡したい (Passing the event time of the main search to a subsearch) on Splunk Search. 02-15-2023 03:46 PM
- Gave Karma to kaede_oogami for Re: What is the cause of image not displayed in screenshotmachine? 11-08-2022 05:27 AM
- Posted Re: Deployment Server 別にサーチ結果を分ける方法 (How to separate search results by Deployment Server) on Splunk Search. 11-03-2022 10:43 AM
- Got Karma for Re: Deployment Server 別にサーチ結果を分ける方法 (How to separate search results by Deployment Server). 11-02-2022 09:02 PM
- Posted Re: Deployment Server 別にサーチ結果を分ける方法 (How to separate search results by Deployment Server) on Splunk Search. 11-02-2022 02:58 PM
- Gave Karma to phanTom for Re: What is the cause of image not displayed in screenshotmachine? 11-02-2022 07:08 AM
- Posted Re: What is the cause of image not displayed in screenshotmachine? on Splunk SOAR. 11-02-2022 06:34 AM
- Posted Re: What is the cause of image not displayed in screenshotmachine? on Splunk SOAR. 10-28-2022 10:28 AM
06-27-2024
03:09 PM
1 Karma
Here is the answer: https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Web-featuresconf#.5Bfeature:dashboards_csp.5D In web-features.conf there is a stanza called [feature:dashboards_csp] where you can allowlist domains with settings of the form dashboards_trusted_domain.<name> = <string>, e.g. dashboards_trusted_domain.smartsheet = app.smartsheet.com.
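Put together, a minimal web-features.conf sketch (smartsheet is just a label for the setting name; the value is the domain to trust):

[feature:dashboards_csp]
dashboards_trusted_domain.smartsheet = app.smartsheet.com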
05-15-2023
08:55 AM
Thank you for sharing that; I understand the problem. However, I have done a reasonably thorough review of the internet, and while many people have this same question, I was not able to find anyone who had solved it in a way I could reproduce. What if we changed minutes to hours? It is not ideal, because you end up with decimal hour values, but it is perhaps more intuitive than minutes, and it is easily possible. You could waste many hours putting together an alternative when this gets you most of the way (note that procTime in the sample data is in seconds, so dividing by 3600 yields hours):

| makeresults
| eval _raw="baseDate,start,end,procTime
2023/05/01,2023/05/01 09:00:14,2023/05/01 09:03:17,183
2023/05/01,2023/05/01 09:03:17,2023/05/01 09:04:57,100
2023/05/01,2023/05/01 09:04:57,2023/05/01 09:08:48,231
2023/05/02,2023/05/02 09:00:11,2023/05/02 09:03:18,187
2023/05/02,2023/05/02 09:03:18,2023/05/02 09:05:31,133
2023/05/02,2023/05/02 09:05:31,2023/05/02 09:09:14,223
"
| multikv forceheader=1
| eval pHours = procTime/3600
| chart sum(pHours) as Hours by baseDate
| eval Hours = round(Hours,2)

I am sorry I wasn't able to be of more help. Splunk tempts us with how much it CAN do, but it still has many things it cannot.
- Tags: japanese
05-12-2023
11:04 AM
Thank you for the sample data set. If I am understanding you correctly, all you need is this:

| eval pTimeFull=tostring(pTime, "duration")

The full example looks like this:

| makeresults
| eval _raw="baseDate,start,end,procTime
2023/05/01,2023/05/01 09:00:14,2023/05/01 09:03:17,183
2023/05/01,2023/05/01 09:03:17,2023/05/01 09:04:57,100
2023/05/01,2023/05/01 09:04:57,2023/05/01 09:08:48,231
2023/05/02,2023/05/02 09:00:11,2023/05/02 09:03:18,187
2023/05/02,2023/05/02 09:03:18,2023/05/02 09:05:31,133
2023/05/02,2023/05/02 09:05:31,2023/05/02 09:09:14,223
"
| multikv forceheader=1
| chart sum(procTime) as pTime by baseDate
| eval pTime=tostring(pTime, "duration")

Splunk's built-in tostring function converts seconds to a human-readable H:M:S format. Was that able to solve your issue?
05-01-2023
08:43 AM
There is a way to do this. The first question is: in your data, is the _time field of the event equal to the processing time (処理時間)? If it is, then you can probably do something like:

| timechart count(something) - this charts values over the time axis.
| eval _time = strftime(_time, "%H:%M:%S") - this takes the time field and displays just the hours, minutes, and seconds, separated by ':' symbols.

If the _time field of the event is NOT the same as the 処理時間 field, it is a little harder to guess at the answer, but it should be similar:

| eval _time = strptime(処理時間, "%H:%M:%S") - this turns the human-readable time into computer-readable (epoch) time.
| timechart count(something) - this charts values over the time axis.

If you are able to share a single event, I could probably do better.
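For completeness, a runnable sketch of the second case with made-up sample times (the 処理時間 values here are illustrative; strptime with only "%H:%M:%S" places them on today's date):

| makeresults
| eval 処理時間=split("09:00:14 09:03:17 09:04:57", " ")
| mvexpand 処理時間
| eval _time=strptime(処理時間, "%H:%M:%S")
| timechart span=5m count

Each row becomes one event stamped with its 処理時間 value, and timechart then counts events per 5-minute bucket.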
04-18-2023
06:58 PM
OK, let's say MY server A is named "splunk.matt.com". In that case, my /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf would look like:

[streamfwd://streamfwd]
splunk_stream_app_location = https://splunk.matt.com:8000/en-us/custom/splunk_app_stream/
disabled = 0

So for you it would be:

[streamfwd://streamfwd]
splunk_stream_app_location = https://<nanikananika.nanika.com>:8000/ja-jp/custom/splunk_app_stream/
disabled = 0

As for server B, this is the part of the docs that tells you: Install an Independent Stream Forwarder - Splunk Documentation. Note that the Splunk App for Stream on server A will generate a curl command for you. The curl command will cause server B to download and install from server A. Here is that part of the docs:

Install an Independent Stream Forwarder using curl. The Splunk App for Stream (splunk_app_stream) generates a curl script that you can run from the command line to install the forwarder.
1. In the Splunk App for Stream main menu, click Configuration > Distributed Forwarder Management.
2. Click Install Stream Forwarder. The Install Stream Forwarder window appears.
3. Copy the curl script.
4. SSH to the Linux machine (server B) where you want to install the Independent Stream Forwarder.
5. Run the curl script you copied from splunk_app_stream. For example:
curl -sSL https://<nanikananika.nanika.com>:8000/ja-jp/custom/splunk_app_stream/ | sudo bash
Answer yes or no at each prompt to download, install, and start the streamfwd binary.

I hope that helps.
04-07-2023
10:49 AM
To do this properly, you would need to do it at index time so that proper event breaking occurs. If you are trying to do event breaking at search time, this becomes much harder. If this data were properly event-broken, each event would have the correct time assigned to it; this is the best practice. However, I know that is not always within your control. If the above is not possible, then I would start with a rex command with the max_match=0 parameter to capture each pattern repeatedly. It might look like:

| rex field=_raw max_match=0 "(?<ipAddress>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(?<connectionTime>.*?)(?=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|$)"

(use whatever field name your data actually has in place of _raw). This creates two new multivalue fields per event, one with ALL the IP addresses and one with ALL the time values. To separate the multivalue fields, you can use mvexpand; it will create a duplicate event for each value. Once you have captured a time field, you might then need to use strptime to convert it into epoch time so that Splunk can put it in time order for you:

| eval _time = strptime(connectionTime, "%Y-%m-%d %H:%M:%S.%N")

As you can see, the method to fix this at search time is very complicated; this search is poorly optimized and there are many points of failure. The best way to proceed, especially if you want to use this data long-term, is to set up proper event breaking. If your Splunk team needs any help with the event breaking, I'd be happy to walk them through it.
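A minimal runnable sketch of the whole search-time pipeline, with a made-up _raw that alternates IP addresses and timestamps (field names follow the rex above; mvzip keeps each IP paired with its own timestamp through the mvexpand, which expanding a single multivalue field alone would not do):

| makeresults
| eval _raw="10.0.0.1 2023-04-01 09:00:00.000 10.0.0.2 2023-04-01 09:05:00.000"
| rex max_match=0 "(?<ipAddress>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s+(?<connectionTime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})"
| eval pair = mvzip(ipAddress, connectionTime, "|")
| mvexpand pair
| eval ipAddress = mvindex(split(pair, "|"), 0), connectionTime = mvindex(split(pair, "|"), 1)
| eval _time = strptime(connectionTime, "%Y-%m-%d %H:%M:%S.%N")
| table _time ipAddress connectionTime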
04-07-2023
10:27 AM
If I understand your question, I might start by using the split() or rex command to turn the field into two fields:

| rex field="接続プロトコル" "(?<user>[^\|]+)\|(?<protocol>.*)"

The intent here is to capture everything until '|' and use it to create a 'user' field, then capture everything after '|' and use it to create a 'protocol' field. Once you have two fields, you can:

| stats count by user protocol
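A self-contained sketch of the whole thing with a made-up sample value ("taro|https" is illustrative):

| makeresults
| eval 接続プロトコル="taro|https"
| rex field="接続プロトコル" "(?<user>[^\|]+)\|(?<protocol>.*)"
| stats count by user protocol

This should return one row with user=taro, protocol=https, count=1.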
02-15-2023
03:46 PM
1 Karma
Sorry to be so slow to respond. There are a few ways to do this. Starting with your example, you need to change the '=' signs to '>' and '<' signs, i.e. earliest>earlytime latest<latesttime. This way Splunk will show events that happen between those two times.
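As a sketch of one common pattern for this (assuming a hypothetical index=trigger holding the event whose time you want, and index=main for the data you are searching): have the subsearch return fields literally named earliest and latest, which the outer search then applies as its time window.

index=main [ search index=trigger | head 1 | eval earliest=_time-300, latest=_time+300 | fields earliest latest ]

The subsearch renders as ( earliest="..." latest="..." ), scoping the outer search to five minutes on either side of the triggering event's _time.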
11-03-2022
10:43 AM
I think I understand: you are wondering if there is an easier way. You may also want a long-term solution, one that automatically updates itself, not a one-time solution. Unfortunately, to the best of my knowledge, there is not a field in Splunk by default that displays a host's server class membership. The good news is we can create the field we need: a field called server_class, populated with the correct values. We can create such a field with a lookup table, tagging, or an index-time field extraction. For me, I would build on the method I suggested above: create a scheduled search on the deployment server that lists the members of each server class, configure that report to write its results to an index with the collect command, and then schedule a search on the search head to turn this data into a lookup table. The result is a self-updating lookup table that lists all hosts and which server classes they are a member of.

Step 1 - Scheduled report on the deployment server. I would use this search:

| rest /services/deployment/server/clients count=0 splunk_server=local
| stats values(serverClasses.*.stateOnClient) as * by hostname
| untable hostname server_class dummy
| fields - dummy
| collect index=test sourcetype=serverclass

This search lists ALL clients and server classes, removes the field named dummy (it is not needed), and then writes that data to an index, in this case the 'test' index, applying sourcetype=serverclass. Also please note that I used the untable command to create a field called 'server_class'. I will schedule this to run every day, so it will be self-updating.

Step 2 - Schedule a search that creates a lookup table on the search head. Now that the above search has run at least once, there will be new event data in my 'test' index with a sourcetype of serverclass. I can now run a search that creates a table of all hosts and the server classes they are a part of:

index=test sourcetype=serverclass
| stats count by server_class hostname
| fields - count
| outputlookup serverClassLookup.csv

This search creates a table of all hosts and server classes and then writes a lookup file with those contents. I will schedule this search the same as the other one, so that it updates itself regularly.

Step 3 - Set up a lookup definition and an automatic lookup. I will omit a detailed description of these steps in the interest of time; if you want more details on how to do so, I am happy to elaborate. For now, let's assume you did so. You are now able to do a basic Splunk search such as index=wineventlog server_class=Airwatch.

Now, there are other ways to solve this same problem, and MANY MANY details and decisions I have skipped over in order to keep things simple, and there are pluses and minuses to the various options. This is the best option in my opinion, but it depends on your circumstances.
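For Step 3, a minimal sketch of what the lookup definition and automatic lookup could look like in .conf form (the wineventlog sourcetype stanza and the hostname-to-host field mapping are assumptions; adjust them to your data):

# transforms.conf
[serverClassLookup]
filename = serverClassLookup.csv

# props.conf
[wineventlog]
LOOKUP-server_class = serverClassLookup hostname AS host OUTPUT server_class

This matches the hostname column of the CSV built in Step 2 against each event's host field and writes the matching server_class value onto the event.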
11-02-2022
02:58 PM
1 Karma
I am enjoying the practice of translating into Japanese; I will come back and try it a little later. If I am understanding you correctly, you want to list all clients for a given server class in a Splunk search. I found a search online and tested it on my deployment server:

| rest /services/deployment/server/clients count=0 splunk_server=local
| stats values(serverClasses.*.stateOnClient) as * by hostname
| untable hostname ServerClassNames dummy
| search ServerClassNames="*Airwatch*"
| stats values(hostname) as host
| format

It is a little tricky, so I will try to explain each step. The first thing to know is that this command can only be run on the deployment server; it will not work if you run it on the search head. So on your deployment server, go to the search app and paste it there.

Line #1 - | rest /services/deployment/server/clients count=0 splunk_server=local | stats values(serverClasses.*.stateOnClient) as * by hostname
This line uses the REST command to list all deployment clients and lots of information about each. The stats command filters that information down to just the fields that display server class.

Line #2 - | untable hostname ServerClassNames dummy
The untable command takes the column names and turns them into values of the ServerClassNames field.

Line #3 - | search ServerClassNames="*Airwatch*"
In my environment, I have a server class called "Airwatch"; this line filters down to just members of that server class. You would type your own server class name there.

Line #4 - | stats values(hostname) as host | format
Because we are on the deployment server, we cannot search the indexers. So this line creates a list of key-value pairs, host=<hostname>, one for each client, which we can copy and paste into the search bar of the search head.

For the next step, log into your search head. In my example I searched Windows event logs: I typed my base search, index=wineventlog, and pasted my formatted list of hosts after it. I know this is complicated, but I do not believe there is an easier way. I hope this helps.
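For illustration, the pasted result could look like this (hostnames are made up):

index=wineventlog ( ( host="dc01" ) OR ( host="web02" ) OR ( host="app03" ) )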
11-02-2022
06:34 AM
OK, good to know, so the problem is not specific to this one URL. You said you could download the file. Are you able to view it with any other programs, like a browser or Photoshop? How about if you add .jpg to the end? In my testing, the object I downloaded had no file extension; the default encoding screenshotmachine uses is jpg. On my test installation, I did not define a file naming convention; I don't recall the files being .jpg, though. I might be tempted to try a different screenshot app and test it. Maybe this one: https://splunkbase.splunk.com/app/5444 If you use the screenshotmachine web interface to grab a screenshot, does it work there?
10-28-2022
10:28 AM
You are correct, I see I made an error: I dropped the 'phantom' segment of the URL. May I assume you caught my error and tried it anyway, with the correct URL? I do not think you installed the application in the wrong location; at least I can see no evidence to suggest that. I would expect the vault_file_path value to be EXACTLY as it appears in your screenshot.

Putting that aside for a moment, here is another way you can download the screenshot object. Please open the event that contains the screenshotmachine action and the file object it created. I have uploaded a screenshot for reference. If you click on the Files (1) section, you will see a list of file objects; it should include the screenshot you took with screenshotmachine. Do you see the ellipsis to the right? I have put a (2) next to it. If you click on that, you can download the file object and then try to open it with other programs.

I don't know what the target URL site is, but it is possible that it is not an image file but some sort of payload, some script or code. In that case it might not decode into an image. Are you able to get screenshots of other websites? Do you know how to create 'test' events in SOAR and use the 'ACTION' button to manually run a screenshot action?
10-27-2022
06:25 AM
Since the screenshot is turning into an icon, I wonder if the browser is unable to decode the file. What browser are you using? Can you try another, or update the current browser? Are you able to download the file via the REST command I shared? Are you able to view it that way?
10-26-2022
02:12 PM
Do you know the event ID where this action was performed? Usually, you access the contents of a vault via the event (container) it is attached to. In the GUI, this means opening Sources, clicking on the event that the screenshotmachine action took place in, and viewing it in the Files tab. It CAN be done via the REST API, but it is very clunky and not the standard workflow. There are a few ways, but if all you have is the vault information, the URL to download via the vault_file_path takes this form:

https://YOURPHANTOMHOSTNAME.DOMAIN.COM/download?document={vault_file_path}

For your screenshot below, this would be:

https://YOURPHANTOMHOSTNAME.DOMAIN.COM/download?document=/opt/vault/51/37/209e0358057f0b921aac365f70b17f0e067d90a4

When you download via the vault file path, you get a tgz of the file system. When you extract it, you see a directory tree named /opt/vault/51/37 and, inside it, a file with no file extension named 209e0358057f0b921aac365f70b17f0e067d90a4. That file can be opened in a browser. My advice is to find the event the screenshot is connected to.
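As a sketch, fetching that URL from the command line might look like the following (the hostname, token, and vault path are the placeholders from above; Splunk SOAR REST calls commonly authenticate with a ph-auth-token header, though the /download endpoint may instead require a logged-in browser session):

curl -k -H "ph-auth-token: YOUR_AUTOMATION_TOKEN" \
  "https://YOURPHANTOMHOSTNAME.DOMAIN.COM/download?document=/opt/vault/51/37/209e0358057f0b921aac365f70b17f0e067d90a4" \
  -o vault_download.tgz
tar -xzf vault_download.tgz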
12-04-2020
09:39 PM
1 Karma
I just installed Docker and the Splunk Connect for Syslog app. I configured the env_file to point to my HTTP Event Collector, have configured the indices, and have received the test events. How do I actually configure listening on a port? The documentation here: https://splunk-connect-for-syslog.readthedocs.io/en/master/configuration/ says: "Other than device filter creation, SC4S is almost entirely controlled by environment variables. Here are the categories and variables needed to properly configure SC4S for your environment." Where do I configure these environment variables? Perhaps /opt/sc4s/local/config, but in what file type, with what schema? I mean, I can read; the key/value pair is SC4S_LISTEN_DEFAULT_TLS_PORT=whatever. But where do I put that? I was trying to set up receiving of firewall logs from pfSense, and the documentation for it says: "Review and update the splunk_metadata.csv file and set the index and sourcetype as required for the data source." So maybe this is the answer, I should create a CSV? That doesn't sound right. Probably if I knew Docker I would know the answer to all these questions, but if anyone could educate me on how to use this, show me some example configurations, and show me the file paths they are located in, I would be deeply appreciative.

<edit> Never mind, I found it. The answer is: most things are configured in /opt/sc4s/env_file; indexes and sourcetypes are configured in /opt/sc4s/local/context/splunk_metadata.csv. In the spirit of intellectual honesty, it was in the docs in a couple of places, namely the Getting Started section in the OS- and container-specific sections, although not in ALL of them. If I may make a request to the app developers: I think adding the two paragraphs below to the Quickstart Guide would have helped; I think it is an intuitive place to look for people that missed it the first time.

Dedicated (Unique) Listening Ports
For certain source technologies, categorization by message content is impossible due to the lack of a unique "fingerprint" in the data. In other cases, a unique listening port is required for certain devices due to network requirements in the enterprise. For collection of such sources, we provide a means of dedicating a unique listening port to a specific source. Follow this step to configure unique ports for one or more sources: Modify the /opt/sc4s/env_file file to include the port-specific environment variable(s). Refer to the "Sources" documentation to identify the specific environment variables that are mapped to each data source vendor/technology.

Modify index destinations for Splunk
Log paths are preconfigured to utilize a convention of index destinations that are suitable for most customers. If changes need to be made to index destinations, navigate to the /opt/sc4s/local/context directory to start. Edit splunk_metadata.csv to review or change the index configuration as required for the data sources utilized in your environment. The key (1st column) in this file uses the syntax vendor_product. Simply replace the index value (the 3rd column) in the desired row with the index appropriate for your Splunk installation. The "Sources" document details the specific vendor_product keys (rows) in this table that pertain to the individual data source filters that are included with SC4S. Other Splunk metadata (e.g. source and sourcetype) can be overridden via this file as well. This is an advanced topic, and further information is covered in the "Log Path overrides" section of the Configuration document.
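For anyone who lands here later, a sketch of what the two files might contain (the HEC URL/token, the port value, and the pfsense_firewall key are illustrative; check the SC4S "Sources" docs for the exact variable and key names for your device and version):

# /opt/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk.example.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=00000000-0000-0000-0000-000000000000
SC4S_LISTEN_DEFAULT_TLS_PORT=6514

# /opt/sc4s/local/context/splunk_metadata.csv (columns: key, metadata, value)
pfsense_firewall,index,netfw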
Labels: configuration
11-24-2020
01:56 PM
After I changed the URL http://splunk:8000/en-US/app/search/search to http://splunk:8000/ja-JP/app/search/search, I was able to save a search and use Japanese characters for the panel title and whatnot. Is this not what you experienced?
10-11-2019
09:12 AM
Yes, in the DB Connect Configuration > Settings > General > JVM Options section you can add
-Djava.io.tmpdir=/app/javatmp
to change the temp directory where the .tmp files are stored. In my case, the .tmp files were overfilling the 2 GB of my /tmp directory, the default path. When I directed them to a new location with more space, the files cleaned themselves up after the ingest completed.
02-27-2018
02:34 PM
In Windows, I went to C:\Program Files\Splunk\var\lib\splunk\modinputs\server\splunk_app_db_connect and gave write permission to the service account: I opened the folder properties, went to the Security tab, selected my service account, and gave it write permission.
12-19-2017
10:11 AM
Here is how I did it: I used loadjob to call a specific report and then piped it to a search command that includes tokens.
|loadjob savedsearch="admin:search:Table of Clipping and Signal to Noise Ratio activity By UserName" | search UserName=$UserName$
Here is what it looks like in XML. Look up loadjob for more info.
<panel>
<input type="text" token="UserName" searchWhenChanged="true">
<label>UserName</label>
<suffix>*</suffix>
<default>*</default>
</input>
<table>
<title>Table of Clipping and Signal to Noise Ratio activity By Group</title>
<search>
<query>|loadjob savedsearch="admin:search:Table of Clipping and Signal to Noise Ratio activity By UserName" | search UserName=$UserName$</query>
<earliest>$earliest$</earliest>
<latest>$latest$</latest>
</search>
<option name="count">10</option>
<option name="drilldown">none</option>
<option name="refresh.display">progressbar</option>
</table>
</panel>
09-11-2017
08:26 AM
What would the CSV lookup for this look like? Can you paste 2-3 lines, including the header?
09-11-2017
08:06 AM
Thank you. I think this was the issue. There were thousands of extra directories that, while empty, would keep the TailingProcessor busy.